Our AI Journey
A couple of years ago we put the brakes on the use of AI in our business with a strict no-use policy. Some of our clients were making it explicit that AI could not be used in the delivery of projects, and we were concerned, not least from a GDPR and data protection perspective, about the potential for data breaches. This then led to greater concern about AI's potential to undermine human critical thinking and process, create false results, carry inherent bias, and be used in unethical ways.
As we thought more about AI and its potential for both benefit and harm, we took a more holistic approach to the deliberation and development of our AI strategy.
These founding principles set the framework for the strategy:
- How we use AI in the business
- How we use it in research, evaluation, consultancy, and grant making
- The decisions we make about who we work with and on
- Whether using AI creates any detriment to people, communities, economies, and the aspirations of young people
We are not alone.
When we started to talk to clients, friends and partners, we found we were not alone in our concerns, yet adoption was happening at pace, and insidiously, through models like ChatGPT. We were seeing more and more content that had clearly been written by a GPT.
As a business committed to tackling inequalities, we could see very clearly the potential harm that AI could do to the labour market. Seeing adverts for AI employees who could replace many foundational-economy jobs felt scary, especially in a labour market where people with protected characteristics were likely to face the greatest detriment. The devastation that AI could cause by hollowing out local economies is, to us, a real issue we cannot ignore. The potential for widening inequalities is very real.
We also had to consider the consequences for our business and those of our friends and partners. The quick-fix, low-cost opportunity of creating marketing materials, videos and content has had such an effect on the freelance and creative economy that we needed to understand what our contribution to mitigating it could be.
Taking a stance not to contribute to this effect is important to us, so we made the decision not to use AI to replace what we would have asked a human to do. In practice this means supporting freelancers, creatives and suppliers aligned to our values. This might have implications for our competitiveness, but it is a false economy to look only at costs on paper without assessing the potential to generate social value and improve equity.
Getting my nose under the bonnet
Following great thought leadership on the issue from people like Rachel Coldicutt, and from businesses such as SUPERRR Lab who are also taking a stance, I took the plunge and enrolled on the AI Strategy for Business course with the London Business School.
As a completely non-technical person it was daunting, but it was extremely helpful and pragmatic: we had ten weeks to consider an AI project or product and whether or not it could be applied in the business.
The key things I learnt were:
- Understanding the terminology and how it all works without having to be a technical person: like seeing under the bonnet of a car and ‘knowing what bit does what’ without being a mechanic.
- The ethical and economic implications of adoption and considerations of when and how to use it. Attending the course at the same time as the launch of our strategy was reassuring.
- The difference between weak and strong AI, and that despite all the hype AI is still weak; more than 80% of AI businesses fail. We must exercise caution and beware of the FOMO effect: the Fear of Missing Out is not a call to action.
- AI, like the technological advancements that have followed the internet, is here to stay. How we choose to use it and work with it is up to us.
Standing strong
AI has enormous potential for good and bad. I have seen the opportunity of using machine learning more effectively to understand the impact of early intervention and prevention beyond cost-benefit analyses. Its ability to process open data and proprietary data ETHICALLY could be transformational. But it is still early days.
That is why we have made the stance as a social purpose business working to tackle inequalities that we will always be:
- Human led
- Human informed
- Human first
We have shared our AI Policy openly to contribute to the debate and inspire others, and we plan to share more as we learn about AI's application and potential over the coming months and years. If you would like to find out more about our journey, and how we might be able to help you navigate the AI challenge, please get in touch.
A blog written by Caroline Masundire, Managing Director and Owner of Rocket Science