When AI works effectively
Successful business data intelligence within the public sector demands and demonstrates the highest degree of accuracy: unbiased, reliable, relevant, and real-time information within a secure environment. Sound algorithms, scenario planning, and data governance, as well as cybersecurity technologies, are all essential for a working AI system to add value.
The potentially overwhelming journey of introducing and implementing AI is necessary to make our communities sustainable, streamlined, safer, and more efficient in providing value-added services to all stakeholders. Secure and accurate payment systems, effective utility management, efficient and customer-focused transportation routes and services, focused law enforcement efforts, and employment functions are just a few of the activities governments perform for their constituents. Generally, the transparency requirements within the public sector are greater than in the private sector. Simply stated, the main expectations of stakeholders (i.e., citizens, business partners, banks, rating agencies) centre on cost effectiveness. In the local and state government arenas, expenditures are often subjected to media and special interest scrutiny and put to a vote by the citizens or their elected representatives.
People are more likely to balk at an unknown and misunderstood technology like AI once they have seen the price tag. However, government officials may want to perform a cost-benefit analysis of the long-term cost of not utilising AI, which could outweigh the dollar cost of development and implementation.
This report illustrates real-world AI implementation successes in the public sector and demonstrates the benefits of AI — while considering the cost-versus-benefits challenges — and concrete steps to help deliver AI to the public sector.
Missouri utility explores ChatGPT
During CPA Week in Missouri, the state society encouraged CPAs to visit high school accounting classes to help promote the profession and generate interest in accounting as a career path. At a recent high school accounting class visit, a student posed an insightful question, ‘Will there be jobs in accounting with future advancements in technology?’ This basic question underscores how many CPAs, and the finance and accounting profession in general, should approach AI. At City Utilities of Springfield, the accounting and finance department is in the exploratory stage of this new technology. They are testing ChatGPT, the chatbot developed by OpenAI, to see how it helps with simple queries or developing an outline for ideas. It can be used to draft an email on a specific topic or customer response, thus allowing the user to spend less time drafting communications and more time focusing on refining the final product.
AI can also be used to develop formulas in Excel. For example, a recent query demonstrated how a specific Excel formula can be built. It was shared in a departmental meeting at City Utilities to illustrate how ChatGPT can help with something that might otherwise have been googled, showing how generative AI can provide a different way to generate a how-to guide.1
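The original screenshots are not reproduced here, but a hypothetical exchange of the kind described might ask ChatGPT to build an exact-match lookup formula. The sketch below shows such a formula and a Python equivalent of what it does; the formula, account numbers, and rates are invented for illustration only.

```python
# Hypothetical illustration (not the actual screenshot from the report):
# asking ChatGPT "How do I look up a customer's rate by account number
# in Excel?" might return a formula such as:
#
#   =VLOOKUP(A2, Rates!$A$2:$B$100, 2, FALSE)
#
# The Python equivalent of that exact-match lookup:

def vlookup_exact(key, table):
    """Return the value paired with key, like VLOOKUP with FALSE (exact match)."""
    for lookup_value, result in table:
        if lookup_value == key:
            return result
    raise KeyError(f"{key!r} not found")  # Excel would display #N/A instead

# Invented rate table for the example
rates = [("ACCT-001", 0.082), ("ACCT-002", 0.095), ("ACCT-003", 0.077)]

print(vlookup_exact("ACCT-002", rates))  # 0.095
```

As with the Excel formula, the lookup either returns the matched value or signals that no match exists, which is the behaviour a user would want ChatGPT's explanation to spell out.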
1 The screenshots in this section of the report were captured by report author Jeff Parkison while using OpenAI’s ChatGPT.
Another way City Utilities’ employees are finding efficiencies is in developing relevant interview questions for prospective job candidates.
In a world where we use ‘Google’ as a verb, a web search is often our first stop to quickly find answers or information, but those results are often littered with advertisements and can be skewed by algorithms. ChatGPT and other technologies have the potential to effectively and efficiently data mine and analyse large bodies of information and summarise their findings in a clear format, making it easier to understand recommendations or ideas. The algorithms behind these tools will build knowledge as they are fed information; however, even as AI tools grow in depth of knowledge, their outputs will still need to be fact checked.
12 considerations for success of AI in Washoe County, Nevada, USA
Washoe County, Nevada, home to Reno, has revolutionised the way AI is utilised in its government operations, thanks to Chief Information Officer Behzad Zamanian. An article in Government Technology outlines how public sector agencies can take what they learned from the rise of the internet and search engines and apply it to AI as a tool to deliver better services.2
2 Behzad Zamanian, ’12 Steps Local Governments Can Take to Successfully Use AI’, GovTech, July 21, 2023.
3 Ibid.
These 12 considerations include:
Ethical implications of using AI
Regulation and governance
Education of users
Interagency collaboration and partnerships
Privacy and security
Equity, inclusivity, and accessibility
Planning
Analytics
Talent and the way of working
Pilot projects
Legal and regulatory considerations
Community engagement
The article highlights the ethical considerations that must be addressed to ensure that AI systems are used responsibly and do not violate people’s rights.3
Los Angeles County uses AI to assist in the homelessness crisis
In 2023, the Los Angeles Homeless Services Authority (LAHSA) reported 75,518 people experiencing homelessness in Los Angeles County, an increase of 9% from 2022.4 The same report noted 46,260 people experiencing homelessness in the city of Los Angeles in 2023, an increase of 10% from 2022.5 The number of unhoused individuals is increasing nationwide, and the impact is compounded in many major cities. The lack of affordable housing, the high cost of living, and difficult transitions from institutions or health care facilities back into society are driving an unprecedented increase in this population.
Using AI to identify those on the verge of homelessness allows for intervention before these individuals either return to the streets or enter them for the first time. The idea is prevention. The California Policy Lab (CPL) at UCLA created a predictive analytics AI tool that uses data from existing systems to identify and proactively connect at-risk individuals with services needed to obtain housing stability. This tool uses data on those seeking food stamps, addiction and recovery services, housing assistance, and mental health services to identify at-risk individuals. Once the initial identification is complete, LA’s Homelessness Prevention Unit contacts people who have been identified.
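CPL has not published its model's internals in this report, but the general approach described, combining service-usage signals into a risk score and flagging people above a threshold for proactive outreach, can be sketched as follows. All field names, weights, records, and the threshold are hypothetical.

```python
# A minimal, hypothetical sketch of predictive risk scoring.
# This is NOT CPL's actual model; the fields and weights are invented
# for illustration only.

WEIGHTS = {
    "food_assistance": 0.20,
    "emergency_department_visits": 0.30,
    "substance_use_treatment": 0.25,
    "public_mental_health_services": 0.25,
}

def risk_score(record):
    """Combine binary service-usage flags into a score between 0 and 1."""
    return sum(WEIGHTS[k] for k, used in record.items() if used and k in WEIGHTS)

def flag_for_outreach(records, threshold=0.5):
    """Return the IDs whose score meets the threshold, for proactive outreach."""
    return [rid for rid, rec in records.items() if risk_score(rec) >= threshold]

# Two invented records: person B uses more of the tracked services
records = {
    "A": {"food_assistance": True, "emergency_department_visits": False,
          "substance_use_treatment": True, "public_mental_health_services": False},
    "B": {"food_assistance": True, "emergency_department_visits": True,
          "substance_use_treatment": False, "public_mental_health_services": True},
}

print(flag_for_outreach(records))  # ['B']
```

A production system would learn its weights from historical outcomes rather than hard-coding them, but the pipeline shape, score then flag then contact, matches the workflow the Homelessness Prevention Unit is described as following.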
4 Elysee Barakett, ‘2023 Homeless Count Shows a 9% Rise in People Experiencing Homelessness in LA County’, NBC Los Angeles, June 29, 2023.
5 Ibid.
The tool takes into account ‘everything from who enrolls in food assistance, to who's in the emergency department or who has treatment for substance use or public mental health services,’ said CPL Executive Director Janey Rountree. ‘Artificial intelligence is a really new and emerging field, and I think we're focused on how that type of science can really help people and help the county serve people who are experiencing homelessness.’6
This program has been in effect since July 2021 and is still in operation today. In the two years for which data is available, the program has worked with 560 individuals, of whom ‘a large majority have stayed housed so far’.7 As with any new program, many questions about measurable effectiveness have come up, and study results are not expected to be published until 2026. Although the program is new, however, other cities and counties across the country have been implementing similar processes with good initial results. Because this example moves beyond mere identification and response into prevention, demonstrating the possible positive effects of AI in the public sector, many are watching to see whether this use of AI continues to grow.
6 Rob Hayes, 'How LA County Is Using AI to Help Solve Homeless Crisis', ABC 7 Los Angeles, October 31, 2023.
7 Jennifer Ludden, ‘Los Angeles Is Using AI to Predict Who Might Become Homeless and Help Before They Do’, NPR, October 4, 2023.
The City of San Jose, California, develops AI guidelines
The information technology department for the city of San Jose, California, has begun a collaborative process to develop the city’s AI policy. To kickstart the process, the city issued seven key guidelines to assist the AI working group with policy development. The first guideline — ‘Information you enter into generative AI systems could be subject to a Public Records Act request’ — has relevance to many governments that may be looking to implement AI.8
Confidentiality, particularly related to personally identifiable information, is a top priority for any entity in creating an AI platform relevant to all stakeholders. But this is especially true for governments. Because government agencies, states, and municipalities handle vast amounts of sensitive data — including personal records, financial information, and health records — any AI they use must have appropriate measures in place to protect citizens’ privacy and prevent unauthorised access to sensitive information.
Beyond the confidentiality concerns, fairness, transparency, and ethical use of AI technologies are also vitally important. Bias and fairness must be considered, and the historical data utilised should be examined to ensure governments minimise, or even eradicate, unfair outcomes and avoid perpetuating existing biases. Bias could be implicit, where data is unintentionally skewed within AI systems due to the historical or societal context in which they are developed. But there can also be sampling bias if data is not representative of the population, or temporal bias, where change over time is not accounted for.
This means that the algorithms that affect decision-making must strike a balance between transparency and maintaining the confidentiality of sensitive information. Citizens should have some level of insight into how AI decisions are made and what data they are based on, without compromising privacy. Confidentiality, impartiality, transparency, and safe operations are crucial to maintaining public trust and protecting citizens' rights.
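The sampling bias described above can be made concrete with a contrived illustration: if a model's training data over-represents one group, the rate it learns will differ from the true population-wide rate. All figures below are invented.

```python
# A contrived, invented illustration of sampling bias: the population-wide
# approval rate differs from what an unrepresentative sample suggests.

population = {
    # group: (number of applicants, number approved) -- invented figures
    "urban": (800, 600),    # 75% approved
    "rural": (200, 60),     # 30% approved
}

def approval_rate(counts):
    """Overall approval rate across all groups in the counts."""
    total = sum(n for n, _ in counts.values())
    approved = sum(a for _, a in counts.values())
    return approved / total

true_rate = approval_rate(population)  # (600 + 60) / 1000 = 0.66

# A biased sample that drew almost entirely from urban records:
biased_sample = {"urban": (780, 585), "rural": (20, 6)}
sample_rate = approval_rate(biased_sample)  # 591 / 800 = 0.739 (rounded)

print(round(true_rate, 3), round(sample_rate, 3))
```

A model trained on the biased sample would inherit the inflated rate, which is why examining how representative the historical data is, before training, matters as much as examining the model afterwards.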
8 ‘Information Technology Department Generative AI Guidelines’, City of San Jose, 2023.
As part of a holistic approach to AI implementation, governments should assess the vulnerability of their systems to cyberattacks, including potential manipulation of the models leading to incorrect decision-making. A secure infrastructure, including regular audits and threat detection mechanisms, is key. To support accountability within the system, clear guidelines within legal frameworks are necessary and should define liabilities and ensure transparency.
Many considerations are necessary in implementing AI. A key factor is training personnel to work effectively with AI. AI is intended to augment human capabilities, not replace them, and balancing automation with human judgment is essential for optimal outcomes. The long-term impacts of AI within the government entity, the economy, and society require regular assessments and adjustments. Policies should be developed, reviewed, and revised on a regular basis. As regulations continue to develop, the ability to implement policies and procedures that ensure compliance with these regulations is required to maintain transparency and public trust.
The necessity of addressing confidentiality, security, training, governance, and procedures, among other factors, cannot be emphasised enough when developing AI and generative AI platforms. As trusted fiduciaries of private and protected data, governments must remember not to provide personal and private information to AI tools, as doing so could make that data public. However, there are many uses of AI that can help governments boost efficiency and efficacy without putting personal or private information at risk.
City of San Jose: Seven key AI guidelines
1. Information you enter into Generative AI systems could be subject to a Public Records Act (PRA) request, may be viewable and usable by the company, and may be leaked unencrypted in a data breach. Do not submit any information to a Generative AI platform that should not be available to the general public (such as confidential or personally identifiable information).
2. Review, revise, and fact check via multiple sources any output from a Generative AI. Users are responsible for any material created with AI support. Many systems, like ChatGPT, only use information up to a certain date (e.g., 2021 for ChatGPT).
3. Cite and record your usage of Generative AI. See how and when to cite in the “Citing Generative AI” section. Record when you use Generative AI through [official form].
4. Create an account just for City use to ensure public records are kept separate from personal records. See “Getting started with Generative AI for City use.” If a user agrees to the terms and conditions of a system that the City does not have a formal agreement with, he/she is responsible for complying with those terms and conditions.
5. Departments may provide additional rules around Generative AI. Consult your manager or department contact if there are additional department-specific rules.
6. Refer to this document quarterly, as guidance will change with the technology, laws, and industry best practices. Check the “Change Log” to identify changes.
7. Users are encouraged to participate in the City’s established workgroups to help advance AI usage best practice in the City and enhance the Guidelines. See “Joining AI Working Group” section.9
9 ‘Information Technology Department Generative AI Guidelines’, City of San Jose, 2023.
Note: Hyperlinks have been removed from quoted text.
Stay vigilant
Despite AI’s impressive capabilities, its use shouldn’t keep us from remaining alert and rational. Recently, in Hong Kong, an employee was duped into transferring over $25 million to cybercriminals after receiving instructions they believed to be from the organisation’s chief financial officer (CFO). The employee followed common guidance to verbally verify an email containing instructions to make nonstandard transfers. The perpetrators, however, used deepfakes (images, videos, or audio snippets that have been digitally altered) to stage a video conference call in which what appeared to be the CFO confirmed the transfer. The ability to use video meetings to perpetrate fraud requires us to be vigilant in knowing who we are speaking with, even when it seems apparent that we know the person on the other end of the call.
Governments must collaborate, learn, and adapt to ensure AI serves the public interest effectively and ethically. Beyond the items discussed in previous sections, scenario testing and stress testing should be conducted. This should include edge cases, extreme conditions, and assessing how AI handles unexpected data. Governments should also consider collaborating and consulting with external experts, including academics, researchers, and industry specialists, outside of the public sector. By attending conferences and workshops, along with collaborating on research, officials can identify weaknesses and vulnerabilities and improve defences.
Edge cases refer to situations or scenarios that fall outside the typical or expected behaviour of a system, process, or algorithm. These cases are often at the boundaries or extremes of what the system is designed to handle. In software development, edge cases are important to consider because they can reveal vulnerabilities, unexpected behaviours, or limitations. For example, testing how an application handles very large or very small inputs would involve examining edge cases. Similarly, in legal contexts, edge cases might involve unusual circumstances that challenge existing laws or precedents. Overall, understanding and addressing edge cases is crucial for robust and reliable systems.
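The idea can be made concrete with a small sketch: exercise a routine at the boundaries and extremes of its intended inputs, not just on typical values. The function below is a simple invented example, not drawn from any system discussed in this report.

```python
# A small sketch of edge-case testing: check typical inputs, then the
# boundaries and extremes where unexpected behaviour tends to hide.

def percent_change(old, new):
    """Percentage change from old to new; undefined when old is zero."""
    if old == 0:
        raise ValueError("percent change from zero is undefined")
    return (new - old) / abs(old) * 100

# Typical case
assert percent_change(100, 110) == 10.0

# Edge case: a zero baseline is rejected rather than dividing by zero
try:
    percent_change(0, 50)
except ValueError:
    pass

# Edge cases: negative baseline and very large magnitudes
assert percent_change(-100, -50) == 50.0
assert percent_change(1e15, 2e15) == 100.0

print("all edge cases handled")
```

The same discipline applies to AI systems: feeding a model inputs at the extremes of what it was trained on, or deliberately malformed data, reveals the limitations that typical inputs never would.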