Responsible AI Exploration
At LogicalOutcomes, we're always looking for ways to improve our work. Artificial Intelligence (AI) offers significant efficiency gains, and we see potential applications across the projects we undertake.
Our position allows us to connect organizations with AI solutions, particularly for evaluation capacity building. We understand the concerns around AI, including unpredictable outputs and data security, and we're committed to being transparent about these risks.
Our approach to AI is ethical and participatory. Rather than solving problems on our own, we explore possibilities together with the people involved, staying true to our values. We're developing applications with controlled data sources and workflows, so that data is used only for its intended purposes.
We've been experimenting with Large Language Models and the MindStudio platform. True to our tradition of sharing tools we develop, we're introducing the Evaluation Planner app (currently in beta). The app reduces risk and provides accountability by making its source material explicit.
Draft evaluation plans are informed by the approach described in our recently published Evaluation Handbook. The handbook offers a lean, pragmatic take on evaluation that aims to centre participant feedback on programs and services. The Evaluation Planner app lets users create a draft plan that applies this approach to their own program evaluations. We are open to sharing the entire app with other organizations (contact us for access). For a technical look at our work and how we're addressing security, check out AI Tools Notes & Disclaimers.
We are excited about the potential of AI, but we're approaching it cautiously and ethically. We are looking for partners to co-develop AI solutions. If you're interested in collaborating or learning more, contact us.