Wouldn’t it be nice if your new system for generative artificial intelligence (GenAI) came as an easy-to-assemble flatpack like something from IKEA? GenAI is artificial intelligence that can make text, images, or videos from input data. It would be great if it came with all the parts and instructions you need, designed in a way that ensures your success.
We are not there yet, says Alexander Galt, Digital Ethics Leader at IKEA. These days, most companies are still wrestling with how to incorporate such a powerful technology into their operations. How do you discover the ways in which it might be most helpful? How do you make sure you are taking advantage of the opportunities without risking the exposure of your company’s data or intellectual property, or inadvertently copying somebody else’s?
Our company is no exception. At Inter IKEA, the range, supply and concept group of the global home furnishings brand, we are exploring how we can use these amazing new tools. We are confident, however, that we have laid a solid foundation for our learning and experimentation journey – one that fits our culture and could potentially serve as a model for other companies as well.
When OpenAI first brought generative AI into the mainstream late in 2022, most companies faced a surprisingly difficult question: should we start using GenAI, and if so, how? It presented a real dilemma. Do we prohibit it altogether because we have so many unanswered questions about the tool? Or would that just be kidding ourselves, burying our heads in the sand, because our co-workers are going to use it anyway? And if we do restrict it, aren’t we going to miss out on an important opportunity to understand the potential benefits?
Many of our colleagues found these questions interesting, but for my colleague Dijana Aleksić and me, it went deeper. Dijana works with employee data at the Inter IKEA Group and studies employee experience with AI as a part-time PhD candidate at RSM. We both agreed that getting security right and setting clear expectations for users would be essential to set us off in the right direction with GenAI.
First, we wanted to make sure our data was secure, which is why we set up an AI ‘sandbox’: a secure environment in which to experiment with generative AI.
Second, we worked through some ethical considerations and set expectations for how GenAI should and should not be used. We concluded that if you need factual output, generative AI is not for you. If you’re generating images and you need to own the intellectual property rights to the output, generative AI isn’t for you. If you want to generate code and you don’t have a way to check for the security vulnerabilities it may introduce, generative AI isn’t for you.
At the same time, we wanted to make sure nobody felt they were under any pressure to create a workable product just yet. From the beginning, we wanted the focus to be more on experimentation and sharing rather than pursuing a specific application. Our hope was to give people a chance to get to know the technology, think about their business needs and undertake experiments, all without worrying too much about the success of any particular project.
No one team was in charge, though the initiative came from a small group of AI enthusiasts who led the effort to facilitate the learning process. They launched a virtual community of practice with and for GenAI enthusiasts – an open community that extended beyond the technology departments to business functions that could use AI capabilities, such as HR, Learning & Development and communications, and was open to anyone interested in learning more about the technology. Six months later, it has grown into a diverse group of nearly 600 Inter IKEA co-workers from a variety of functional areas, eager to share their knowledge and learn together.
Our community of practice is decentralised, and while senior leadership is supportive of our work, we don’t have a specific mandate beyond facilitating learning, sharing what experiments are going on, and following the evolution of the technology. Of course, use-case experiments involve both the business and technology teams from the relevant business units.
Today, the experiments continue. As far as the company is concerned, GenAI is still very much a work in progress. However, our experience of working with GenAI in the community of practice has already taught us six important lessons:
Experimenting with a technology in accordance with your firm’s values is a good way to develop it so that it is both consistent with your culture and useful to your company.
It is important to put strong guardrails in place before you start experimenting. This is the best way to minimise the risk that your data might be compromised.
Cross-functional discussions are extremely useful, especially when it comes to evaluating the potential of a major new technology such as GenAI.
There is still a lively debate about what GenAI is and how to use it. So far, our biggest takeaway is that GenAI isn’t an oracle: it can’t give you the answer. However, you can use it to trigger conversations and raise relevant business questions that help you look at a problem from a different angle. Some of our best learning moments with GenAI have come when we use it as a tool to support our own creativity, and don’t take it too seriously.
We are still a long way from having a product that can easily scale up. In many ways, the benefits of GenAI are likely to stay intangible for some time yet, until longer-term internal data sets and test results become available. It is too early in the cycle to see those benefits or to point to concrete examples of how GenAI can make a difference.
It’s important to suspend judgement about the potential of these tools. Critics question the value and quality of GenAI’s creative output. GenAI has the potential to facilitate creative thought processes and produce something novel, but organisations that use it only to deliver work faster and more cheaply may miss out on those opportunities.
Ultimately, our biggest lesson was the confirmation that our culture stands the test of even the most innovative technologies, and this reaffirmed our conviction that innovation doesn’t need to happen in a department called innovation. In a creative organisation, innovation can come from anyone committed to looking for ways of doing things better. Our community of practice is turning out to be an excellent way to trigger new ideas and learn from each other’s experiences, fuelled by curiosity about how this new technology can provide, in the words of IKEA’s own vision statement, “…a better everyday life for the many people.”