
Article: Tuesday, 14 January 2025

Entrepreneurial thinking has a history of transforming industries and reshaping everyday life. When educator and innovator Sir Rowland Hill first proposed stamps, paper ‘covered at the back with a glutinous wash’, the UK’s Postmaster General Lord Lichfield dismissed the idea as the most extravagant of all the “wild and visionary schemes”. Yet Hill’s idea was rooted in practicality: people avoided the postal system because it was slow and costly.

By introducing pre-paid postage, Hill aimed to make mail affordable and accessible. Despite initial opposition, the Penny Black stamp featuring Queen Victoria debuted in 1840 in the UK. It quickly became a success. Over 70 million letters were sent that year, with volume tripling within two years. Looking back, the impact was exponential.

Today, start-ups with a similar mindset are transforming sectors from manufacturing to media and communications, customer services, and the arts. The AppliedAI Institute for Europe estimates that the EU is home to around 6,300 AI start-ups, 10.6 per cent of them focusing on generative AI. This growth and adoption lead to pressing questions: how can the start-up mindset be sustained while ensuring responsible development? Can AI tools genuinely benefit workers?

At the 2024 Reshaping Work Conference, the roundtable session ‘Think like a Start-up’ brought together scholars, government representatives, researchers and industry experts to discuss challenges at the intersection of innovation, work and ethics. The session was led by Lucie-Aimée Kaffee, EU Policy Lead and Applied Researcher at Hugging Face, the start-up providing the most widely used platform for sharing and collaborating on AI models and systems.

Worker transparency: common challenges of AI models

Protecting human interest has become a focal point of regulation when addressing the unique challenges posed by big and small AI models. The former rely on neural networks whose performance improves with more data; some LLMs, such as those behind ChatGPT, have over 200 billion parameters per model, making them expensive to train and run. The latter are fine-tuned for specific tasks or domains with fewer parameters, yet can rival larger models: Phi-3-mini and its open-source counterpart SmolLM, for instance, match the performance of models 25 times their size.

Transparency is essential for fostering trust and understanding among workers. Without clear insight into how AI models function, workers may become sceptical or distrustful of AI-supported decision-making. The City of Amsterdam officials in charge of innovation ecosystems and the job market suggest that a bottom-up approach has proven crucial in analysing and aligning AI initiatives by sector. Exposure differs between roles that are heavily information-dependent or within public administration and those in hospitality or manufacturing.

Some companies have advocated including workers in the development process by empowering them to experiment directly with AI tools. The concept of the ‘citizen data scientist’, which enables workers to engage hands-on with AI, exemplifies how inclusivity can help demystify AI and build trust. “Citizenship in data science could be the future of a more inclusive workplace,” said a digital design specialist from a software agency at the roundtable session. Allowing workers to test and adapt AI tools not only makes the technology more accessible but also reduces apprehension about its role in the workplace.


Designing for augmentation, not replacement

Designing AI to enhance human roles, not replace them, was the roundtable’s main theme. The consensus around the table leaned toward a model of augmentation in which AI takes on repetitive tasks, allowing workers to focus on creative, strategic, or emotionally driven work. Involving workers directly in AI adoption also enhances job satisfaction and morale by inviting feedback and addressing redundancy. “Depending on their identity, people may be more willing to adapt,” an HR expert shared. “The question is how to design better organisations while remaining conscious of the threats AI poses in the workplace.” This approach ensures that AI aligns with workers' needs, creating a supportive rather than disruptive presence.

Small AI models, developed by start-ups and large companies alike, are for the most part focused on a single task or reduced scope, which allows for faster testing, iteration, and refinement. Some of these models have improved productivity and streamlined processes. Yet even small AI models should prioritise responsible design, such as privacy by default. “If a model can understand your [communications] history with a coworker and draft an email response, it’s a step forward,” a customer service expert noted. “But in the end, it’s the human expertise that makes the final product valuable.”

Big AI models face challenges of scale and oversight. “Europe is already seen as too bureaucratic, even if start-ups are not highly disruptive,” a portfolio manager argued. “Start-ups shouldn’t be over-regulated; let’s focus on bigger fish while fostering grassroots innovation.” Some participants agreed that this differentiation should come to the fore in norm setting or regulatory enforcement, especially with frameworks like the EU AI Act at the national level.

Aligning AI models with needs in the workforce

Start-ups are powered by an entrepreneurial mindset that provides lessons in adaptability, experimentation and scaling. Just as Hill’s postage stamp transformed communications by making postal delivery more accessible and reliable, AI development is reshaping the core foundations of value generation. So, how can organisations maintain the start-up spirit of innovation while navigating the complexities of responsible AI? 

For big AI, rigorous oversight and transparency are essential, given these models’ potential impacts across industries and in broad societal areas such as education, neuroscience, healthcare, and public safety. While small AI models are generally lower risk, they should integrate human-centred approaches to maintain trust and inclusivity.

“Automation is a long-term investment. High-frequency tasks will be targeted. Fraud detection, for example, is often [done] using machine learning,” an AI consultant suggested. “The attitude is that we should be scared but excited, being realistic about models’ use and asking organisations about their trust and culture.” Workers remain central to this transformation. To enhance productivity without eroding trust, clear guidelines are needed on the deployment and impact of AI models in most areas. This includes transparent communication, promoting augmentation, and supporting upskilling opportunities to align AI with human needs in the workforce.

Lucie-Aimée Kaffee

EU Policy Lead and Applied Researcher at Hugging Face

Franco Antonio Bastida

Writer and global policy expert based in Berlin, and Reshaping Work Fellow

RSM Discovery

Want to elevate your business to the next level using the latest research? RSM Discovery is your online research platform. Read the latest insights from the best researchers in the field of business. You can also subscribe to the newsletter to receive a bimonthly highlight with the most popular articles.
Your contact for more information:
Danielle Baan

Science Communication and Media Officer

Erika Harriford-McLaren

Corporate Communications & PR Manager
