Tackling AI risks: Your reputation is at stake
Risk is all about context
Risk is all about context. In fact, one of the biggest risks is failing to acknowledge or understand your context: That’s why you need to begin there when evaluating risk.
This is particularly important in terms of reputation. Think, for instance, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can handle, but what if it has a significant health or financial impact?
Even if implementing AI seems to make sense, there are clearly some downstream reputation risks that need to be considered. We’ve spent years talking about the importance of user experience and being customer-focused: While AI might help us here, it could also undermine those very things.
There’s a similar question to be asked about your teams. AI may have the capacity to drive efficiency and make people’s work easier, but used in the wrong way it could seriously disrupt existing ways of working. The industry has been talking a lot about developer experience recently—it’s something I wrote about for this publication—and the decisions organizations make about AI need to improve the experiences of teams, not undermine them.
In the latest edition of the Thoughtworks Technology Radar—a biannual snapshot of the software industry based on our experiences working with clients around the world—we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but we also note that the focus has to be on enabling teams, not individuals. “You should be looking for ways to create AI team assistants to help create the ‘10x team,’ as opposed to a bunch of siloed AI-assisted 10x engineers,” we say in the latest report.
Failing to heed the working context of your teams could cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation—it’s not. It’s showing potential employees—particularly highly technical ones—that you don’t really understand or care about the work they do.
Tackling risk through smarter technology implementation
There are lots of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Technology Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).
However, it’s important to note that managing risks—particularly those around reputation—requires real attention to the specifics of technology implementation. This was particularly clear in work we did with an assortment of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their native languages. The risks here were not unlike those discussed earlier: The context in which the chatbot was being used (as support for accessing vital services) meant that inaccurate or “hallucinated” information could stop people from getting the resources they depend on.