AI Chatbots and Deepfake Awareness: Navigating Risks and Training Your Team
During AAFCPAs’ recent Nonprofit Seminar (April 2025), Ryan K. Wolff, MBA, and Vassilis Kontoglis presented practical guidance to more than 550 attendees on the responsible use of AI tools in the workplace, including risk mitigation strategies, security policy considerations, and how to recognize and respond to deepfake threats. This article offers select takeaways for nonprofit leaders and is designed to complement the full webcast.
AI tools are entering the workplace faster than most policies are written. For nonprofits, this raises practical questions: Which tools offer real value? What are the risks to security, data, reputation, and decision-making? And how do you help your team use these technologies wisely? Whether automating routine tasks or filtering misinformation, responsible use begins with understanding both the possibilities and the limits.
AI Chatbots Are Tools, Not Advisors
As interest in AI continues to grow, so does the pressure to find practical applications. Leaders are asking where these tools may offer real value—and where boundaries should be drawn. The answer lies not in the technology itself but in how organizations choose to deploy it.
Chatbots have earned a place in the workplace but not at the decision-making table. While they can accelerate drafting, streamline communications, or even suggest new approaches to a problem, they do not understand the context or stakes of your work. They respond based on patterns, information available to them, and model training—not judgment or ethics. And that distinction matters when the output affects operations, compliance, or public trust.
The value of these tools is expanding well beyond communications. Today’s AI solutions are assisting with search and knowledge retrieval, data extraction and modeling, inspection and cleansing, project documentation, personalized reporting, and collaborative problem-solving. For nonprofits, which often operate with lean teams and complex responsibilities, this support can reduce manual work and improve response times. Think donor communications, yes—but also internal FAQs, financial summaries, grant data cleanup, or early-stage policy templates. The key is not whether the tool can help, but how and where it should be applied. Human oversight remains essential. Every output requires review by someone who understands the organization’s goals, audience, and legal obligations.
Establish Security Standards and Guardrails Before Adoption
Before your organization puts any AI tool to work, it is critical to establish clear boundaries for its use. That means more than flagging sensitive data or cautioning against plagiarism. It means defining what problems the tool is meant to solve and what decisions still belong to people.
A well-considered policy should address where AI tools may assist, who is responsible for reviewing their output, and how transparency will be maintained. For example, if a chatbot is used to draft a public-facing document, stakeholders should know that the content originated from an AI model and was reviewed before publication. Policies should also make room for gray areas: What happens when someone misuses the tool, even unintentionally? Who decides what constitutes misuse?
Organizations that take the time to create guidelines tailored to their structure and mission will be better positioned to use these tools effectively without risking missteps that could erode trust or create liability. Guardrails are not about slowing down innovation. They are about mitigating risks.
Train Staff Early and Often
Most AI-related problems do not start with the software. They start with a lack of understanding about how the tools work, what their limitations are, and where human oversight is still essential. That is why staff training should be a central part of any AI adoption strategy—not a follow-up step once tools are already in use.
Training does not need to be technical. In fact, it should be practical. Help staff recognize when AI-generated content may be unreliable, especially in high-stakes areas like finance, compliance, or public communications. Encourage healthy skepticism and offer examples of what good review practices look like.
Consider training as part of a broader culture-building effort. If the message is only about what not to do, employees may hesitate to use the tools at all. But if the message is about responsible experimentation and shared accountability, you are more likely to get thoughtful, informed use across departments. This is not about policing behavior; it is about equipping people to make sound decisions.
Deepfakes Are Getting Harder to Spot
The technology behind deepfakes has advanced faster than many organizations realize. With just a few minutes of video or audio, bad actors can now create convincing imitations of trusted individuals. These fakes may show up in emails, voicemails, or even live video calls, especially during high-pressure scenarios like wire transfers, contract approvals, or vendor changes. One demonstration featured a fake expense receipt from a real restaurant, complete with real menu items and prices. How would someone identify it as fake? Processes need to be adapted for this new era.
Relying on gut instinct or visual cues is no longer enough. Today’s deepfakes are not necessarily glitchy or poorly synced. They may be polished and persuasive, especially when designed to exploit urgency or routine.
Building awareness is key. Train staff to pause and verify, especially when a request seems unusual or out of character. Encourage the use of multi-factor authentication (MFA) for sensitive requests, and make sure people know they have permission to slow things down if something feels off. It is also wise to establish internal code words or backchannels for high-risk communication—simple steps that can make a real difference when seconds matter.
Training Should Match How People Actually Work
Many organizations deliver cybersecurity training through annual modules or video tutorials that are disconnected from day-to-day tasks. While those efforts check the compliance box, they rarely change behavior. If the goal is to reduce risk, training must feel relevant and easy to apply.
Think collaborative learning built into real workflows: short, scenario-based exercises delivered in team meetings, role-play discussions that simulate phishing attempts or deepfake encounters, and clear examples of what to do, not just what to avoid. Training should focus on decision points that arise during routine operations, when urgency or habit might otherwise override caution.
Managers and department heads play an essential role in reinforcing these habits. They can help normalize double-checking and encourage open discussion when something seems off. Security goes beyond IT. It is a shared responsibility rooted in culture, and that begins by helping people recognize how these tools and threats relate to the work already underway.
For guidance tailored to your organization, AAFCPAs’ Advisory practice is available to support your team.
These insights were contributed by Vassilis Kontoglis, Partner, AI Digital Transformation & Security, and Ryan K. Wolff, MBA, AI & Strategic Innovation Consultant. Questions? Reach out to our authors directly or to your AAFCPAs partner. AAFCPAs offers a wealth of resources on technology risk assessment. Subscribe to get alerts and insights in your inbox.