AI Risks: Establishing Workplace Guidelines

AI is rapidly changing the way we work, with AI-powered tools now in use by employees and contractors to manage tasks, retrieve data, brainstorm, and identify fraud. They have been adopted by HR to recruit and screen talent, by sales to provide customer support, by finance to analyze volumes of data and forecast future performance, and by business development, IT, legal, and marketing teams to conduct data analytics, draft copy, and prepare reports. AI tools include everything from familiar Google internet search to Amazon's Alexa, large language models (LLMs) like ChatGPT, chatbots, and generative AI like that found in Bing Image Creator.

But machines come with limitations. Most importantly, they lack empathy along with qualitative, surrounding, and cultural context. AI is capable of generating content, but only based on data already in existence; it is not imaginative, discerning, or forward thinking. Drawbacks include the potential for misinformation, algorithm manipulation, learned bias, and a loss of transparency, along with regulatory noncompliance and copyright infringement. Critically, if employees input sensitive or confidential data into an AI prompt, that information may be retained by the platform and exposed to other users and beyond.

AAFCPAs advises that clients assess their use of AI along with legal and regulatory requirements within their industry and environment. Establish written usage guidelines that are focused on privacy and data security, confidentiality, and anti-discrimination standards. This policy should outline responsible and acceptable AI use, including guidelines for implementation and monitoring. Prohibitions might include the use of confidential, personally identifiable, or client data in LLMs, along with any use of AI-powered tools when making hiring, promotion, discipline, demotion, or termination decisions.

Consider how users might expand AI in the future. Could AI tools be integrated into other solutions, such as CRM? Do you operate in multiple states where federal and state regulations on AI differ? How should users verify data retrieved from LLMs? What are acceptable and unacceptable uses within your organization?

Once policies are in place, awareness is key. Open a discussion about these new tools, explain any risks, encourage responsible use, conduct training, and make sure everyone is on the same page, including all employees, contractors, and authorized users. As with all policies, consider this a living, evolving document.

AAFCPAs provides Robotic Process Automation solutions, including guidance on written usage guidelines for emerging technologies.

If you have questions, please contact Vassilis Kontoglis, Partner, Analytics, Automation & IT Security at 774.512.4069 or your AAFCPAs Partner.

About the Author

Vassilis is a leader in AAFCPAs’ Business Process & IT Consulting Practice. He has 20+ years of proven experience providing business intelligence, productivity, information risk management, and cybersecurity solutions. He is a critical resource in keeping clients and the firm at the forefront of transformative technologies while mitigating the risks that come along with these advancements. Vassilis leads the delivery of Robotic Process Automation solutions at AAFCPAs. He understands the unique requirements to achieve RPA success, including proper design, planning, implementation, and governance. He works collaboratively with clients and cross-functional teams, and leverages his deep understanding of enterprise information systems, business logic, and structured inputs to automate rote processes and increase operational efficiency. Vassilis is also the leader of AAFCPAs’ automation center of excellence (CoE), an internal team that streamlines automation output, provides structure, and helps scale automation throughout the firm.