In my article Artificial Intelligence (AI) for Nonprofits – The End of the Beginning, I discussed the current state of AI and how nonprofit organizations are harnessing it. In this article, I turn to the high-level considerations organizations should address before adopting AI broadly, and I offer recommendations for best practices when employing AI.
High-Level Considerations
Before adopting AI widely, organizations should consider its potential implications. While AI can offer significant benefits, it also presents challenges and ethical concerns, as discussed below.
Training
As with any new technology, there will be a learning curve. Organizational leaders should educate themselves on AI to make informed decisions about integrating it, and staff must be trained to use it properly in accordance with policy and guidelines.
Data Privacy
AI relies heavily on data, and managing sensitive donor and beneficiary information raises ethical and privacy concerns. Nonprofits must adhere to data privacy laws and implement robust security measures to safeguard sensitive information while upholding donor trust.
Data Quality and Bias
AI algorithms mirror the data they are trained on, which may contain bias and perpetuate discrimination and inequality. As the saying goes, “garbage in, garbage out.” It’s essential to exercise caution when making decisions based on AI data analysis to ensure that ethical AI practices are followed.
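To make the "garbage in, garbage out" point concrete, the sketch below (illustrative only; the `donors` records and the `region` attribute are hypothetical) shows a simple check a nonprofit could run before training on its own data: how evenly a dataset represents different groups. A heavily skewed split is an early warning that a model trained on the data may under-serve the smaller groups.

```python
from collections import Counter

def group_balance(records, key):
    """Return each group's share of the dataset for a given attribute.

    A large imbalance is a warning sign that a model trained on this
    data may reflect and amplify that imbalance.
    """
    counts = Counter(record[key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical donor records -- illustrative only.
donors = [
    {"region": "urban"}, {"region": "urban"},
    {"region": "urban"}, {"region": "rural"},
]
print(group_balance(donors, "region"))
# {'urban': 0.75, 'rural': 0.25}
```

A check like this does not prove a dataset is fair, but it is a cheap first screen before relying on AI-driven analysis of donor or beneficiary data.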
Impact Assessment
It is important to continuously assess how AI impacts the organization’s programs, services, and beneficiaries. This is vital to ensure that AI is being effectively implemented and utilized. Since AI is constantly evolving, it may be necessary to adjust strategies if the technology is not achieving its intended outcomes.
Risk Management
AI presents potential risks, such as reputational damage from AI errors and security breaches from hackers targeting AI systems. An organization's Incident Response Plan should address the risks posed by AI.
Best Practice – Adopt an AI Governance Framework
Like an information governance policy, an AI governance policy will guide organizations in adopting and using AI, providing guideposts for its responsible and ethical use.
The policy framework should include the following elements:
- Ensure the AI policy aligns with the nonprofit’s mission and core values. Clearly define the policy’s purpose and scope, including the AI initiatives and applications covered.
- Remember to address “change management” issues—ensuring employees understand that AI will help them work more creatively and strategically, not take their jobs.
- Define the roles and responsibilities of individuals, board committees, and staff members responsible for oversight and implementation.
- Plan to train the board and staff on AI ethics, responsible AI practices, and the organization’s AI policy.
- Explain how the organization will interact with the community, including donors, volunteers, funders, and vendors. Collect feedback and address their concerns about AI initiatives.
- Outline the process for reviewing, updating, and approving changes to the AI policy, involving relevant parties as necessary.
- Articulate the ethical principles that guide AI initiatives, such as fairness, transparency, accountability, privacy, and non-discrimination. Emphasize the organization’s commitment to identifying and mitigating bias in AI algorithms and data sources.
- Address how the organization will ensure compliance with AI-related laws and regulations, including data protection laws, and describe how the organization stays informed about changes in legal requirements.
- Develop a timeline for the AI policy and initiative implementation stages. It’s advisable to take small steps.
The Don’ts of AI
One of AI's main limitations is its lack of "consciousness," which can result in inaccurate or even entirely false information. A significant concern is AI's ability, and some might argue its tendency, to create realistic images and generate fake news or stories. Furthermore, like humans, AI can have inherent biases based on the data it receives, leading to skewed responses and analyses: if the input data contains biases, those biases will also be present in the output. Given all this, nonprofits, like all users and organizations, must be mindful of how they utilize AI. Here are some practical guidelines.
Don’t Plagiarize
AI systems are constantly improving, but AI-generated content may still include inaccuracies or contradictions, sometimes called "hallucinations." Because AI draws on existing content or generates its own, it's best to avoid directly copying and pasting whatever it creates. There is also a risk of unintentionally using copyrighted material, especially images, which can violate third-party intellectual property rights.
Check the Facts
As with the previous point, always verify the truth and accuracy of AI-generated content.
Caution with Search Engine Optimization (SEO)
Google and other popular search engines can detect AI-generated content and may flag it as spam, negatively impacting search engine ranking efforts.
Keep Control of “Brand Voice”
While AI can provide helpful content suggestions and even generate new ideas for brand positioning, it cannot replace the brand expertise and organizational knowledge of your internal and external resources.
Know Your License!
It’s important to remember that AI platforms have specific licenses and terms of use that govern their usage. These licenses grant different rights and impose different limitations, and some may not allow commercial use of the AI output; an organization that uses the output commercially under such a license is violating it. Before using AI internally, carefully read the applicable license and ensure that your planned use aligns with the rights granted to you.