Adam Martel

Assessing Risks Associated with Autonomous AI

Evaluating the risks associated with autonomous fundraising differs significantly from a typical SaaS security assessment. Judging a technology not only on its security but also on the independent, autonomous actions it takes is new territory that demands a different set of criteria. Organizations must assess both the security of the technology and the unique risks and rewards they should expect from it. To be accurate, an evaluation of autonomous fundraising must go beyond standard security measures to capture these distinct considerations.


Based on conversations with hundreds of organizations, our team has compiled the key questions that appear in standard security assessments alongside the questions we believe organizations should be asking about autonomous AI security, assessment, and its specific risks.


Beyond traditional data security risks, new concerns arise with independent, two-way AI communication. For example, what if a donor shares sensitive information like PII, credit card data, or PHI with an autonomous fundraiser? Does the AAI store this data? While traditional fundraisers follow strict policies to avoid storing sensitive information, the current autonomous fundraiser has limited understanding of PII—something we are actively addressing.
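One way to reduce this exposure, regardless of how any particular vendor ultimately handles it, is to screen donor messages for sensitive patterns before they are ever written to storage. The sketch below is illustrative only: the redact_sensitive function, the regular expressions, and the placeholder labels are assumptions made for this example, not a description of our product's implementation.

```python
import re

# Illustrative patterns for common sensitive data in free-text donor messages.
# A real deployment would rely on a vetted PII/PHI detection service and far broader rules.
SENSITIVE_PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact_sensitive(message: str) -> str:
    """Replace likely PII and payment data with placeholders before the message is stored."""
    redacted = message
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[{label} REDACTED]", redacted)
    return redacted

if __name__ == "__main__":
    donor_note = "Sure, charge my card 4111 1111 1111 1111 and call me at (555) 867-5309."
    print(redact_sensitive(donor_note))
    # -> "Sure, charge my card [CREDIT_CARD REDACTED] and call me at [PHONE REDACTED]."
```

The point of a filter like this is placement: it sits between the conversation and any log or database, so the question "does the AAI store this data?" can be answered with a concrete, auditable control rather than a policy statement alone.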


To address these concerns, we are proud to release the first version of our assessment standards toolkit to help organizations establish a baseline for assessing the risks of working with AI companies, including ours. The toolkit includes an AI Security Assessment, an Internal AI Use Policy Template, and AI/ML Security checklists, providing a foundation for building policies and procedures to evaluate the risks of autonomous AI.

The toolkit covers data security, reputational risk, accessibility protocols, organizational readiness, and more. As the first to develop autonomous fundraisers, we aim to help organizations establish thoughtful policies for the use of autonomous AI.



Last week, our team continued evolving the onboarding process for partners. We're exploring how to effectively train the autonomous fundraiser on each partner's own data (websites, videos, emails, etc.) so the AAI reflects the specific characteristics of that organization.
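As a rough illustration of what organization-specific grounding can involve (not a description of our pipeline), the sketch below collects text from a few sources into a small keyword-searchable knowledge base that a fundraising assistant could draw on. The OrgKnowledgeBase name, the overlap-based scoring, and the sample content are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class OrgKnowledgeBase:
    """Toy keyword-based store for organization-specific content (hypothetical)."""
    documents: list[dict] = field(default_factory=list)

    def add(self, source: str, text: str) -> None:
        """Index a piece of partner content, e.g. a scraped web page or an email excerpt."""
        self.documents.append({"source": source, "text": text, "tokens": set(text.lower().split())})

    def search(self, query: str, top_k: int = 3) -> list[dict]:
        """Return the documents whose wording overlaps most with the query."""
        query_tokens = set(query.lower().split())
        scored = sorted(
            self.documents,
            key=lambda doc: len(query_tokens & doc["tokens"]),
            reverse=True,
        )
        return scored[:top_k]

# Invented sample content, standing in for real partner websites and emails.
kb = OrgKnowledgeBase()
kb.add("website/about", "Our food bank serves twelve counties and relies on monthly donors.")
kb.add("email/gala-recap", "The annual gala raised funds for the new mobile pantry program.")
for doc in kb.search("How does my monthly gift support the food bank?"):
    print(doc["source"])
```

In practice the retrieval and ingestion would be far more sophisticated, but the shape is the same: the organization's own material is what keeps the AAI's answers specific to that organization.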


This week we’re hyper-focused on R&D for interactive avatars to enhance the donor’s experience. VEOs will be able to answer donor questions about events, past giving, and other ways to engage with the organization in real time, in a familiar environment such as a Zoom call. Trust and transparency will be at the forefront of our approach to this type of interaction. Our goal is to find ways for technology to enhance the donor’s experience.


