A better approach for building ethical AI in light of DeepSeek news

A democratic, transparent, and open approach to benefit humanity

The global race to develop advanced artificial intelligence (AI) systems has largely been dominated by the United States, and in particular by Silicon Valley enterprises, where innovation is driven by private companies, venture capital, and proprietary technology. One such company, OpenAI, has been described as pursuing a strategy built around monopolising access to AI and selling it as a premium service to those who can afford it.

Recently, however, the Chinese AI company DeepSeek launched a model that upends that vision. Although it was developed at far lower cost, reportedly millions of dollars rather than billions, it is widely seen as just as effective as, and perhaps more efficient than, OpenAI's dominant ChatGPT.

Analysts have said that, because DeepSeek's model is released as open source, it has democratised access to AI, allowing developers and researchers worldwide to download, use, and improve on it freely, at no cost. The emergence of ground-breaking tools like DeepSeek has opened up debate not only on digital sovereignty but also on ending monopolistic proprietary power, and perhaps on returning power to labour.

Concerns have long been raised about monopolistic practices, ethical lapses, and the potential for AI to be misused. The people of Europe, with our strong tradition of pursuing democracy, transparency, and social well-being, have an opportunity to take a different path. By fostering democratic workplaces, ensuring full transparency, and making intellectual property (IP) openly available, we could help develop AI tools that build on and improve what DeepSeek has achieved, while ensuring that the technology remains a force for good rather than a tool for control.

Democratic workplaces: Empowering people for ethical innovation

A democratic workplace is one where employees at all levels have a say in decision-making, fostering a culture of collaboration, accountability, and innovation. In the context of AI development, this approach could ensure that diverse perspectives are included, reducing the risk of bias and ensuring that the technology serves the broader public interest.

For example, European AI initiatives could involve not only engineers and data scientists but also ethicists, social scientists, and representatives from civil society. This inclusive approach would help align AI development with societal values, ensuring that the technology is designed to benefit humanity rather than exploit or control it. In contrast, Silicon Valley’s top-down, profit-driven model often prioritises speed and market dominance over ethical considerations.

Full transparency: Building trust and accountability

Transparency is essential for building public trust in AI systems. By making the development process, data sources, and algorithms fully transparent, European AI projects could demonstrate a commitment to accountability and ethical responsibility. This stands in stark contrast to the opaque practices of many Silicon Valley companies, where proprietary algorithms and data practices are often hidden from public scrutiny.

Transparency would also enable independent oversight, allowing researchers, policymakers, and the public to identify and address potential risks, such as bias, privacy violations, or unintended consequences. This open approach would position Europe as an effective global participant in ethical AI, attracting talent and investment from those who value responsible innovation.

Open Intellectual Property: Fostering collaboration and accessibility

Making all intellectual property openly available could revolutionise AI development in Europe. By adopting open-source principles, European AI projects could benefit from the collective intelligence of a global community, accelerating innovation and ensuring that the technology is accessible to everyone, everywhere. This contrasts with the proprietary model dominant in Silicon Valley, where IP is tightly controlled to maintain competitive advantage.

Open IP would also lower barriers to entry, enabling startups, academic institutions, and even individuals to build on existing work. This collaborative ecosystem could foster rapid advancements in AI, allowing Europe to complement the work of established players like DeepSeek. Creating genuinely open IP aligns with the European population's quest for technological sovereignty, thus helping to ensure that AI technologies are adaptable to local needs and not controlled by a handful of powerful corporations.

Ensuring AI benefits society and avoids control

It is crucial to ensure that the resulting AI systems are developed as, and remain, tools that benefit society as a whole. Here are some key strategies to achieve this:

Strong ethical frameworks and regulation

Europe must establish robust ethical frameworks and regulations to guide AI development and deployment, in order to ensure that AI systems are designed to enhance human well-being rather than control or exploit individuals. The EU's AI Act promises 'a high level of protection to people's health, safety and fundamental rights, and to promote the adoption of human-centric, trustworthy AI'. It is welcome that the Act classifies AI systems by risk level, imposing requirements on unacceptable-risk, high-risk, specific-transparency-risk and minimal-risk applications. But it could be made to point more firmly in the right direction by adopting frameworks that prioritise human rights, privacy, and fairness.

Public participation and oversight

To ensure that AI serves the public interest, European AI projects should involve public participation and oversight. This could include citizen assemblies, public consultations, and independent review boards to assess the societal impact of AI systems. By giving the public a voice in AI development, Europe can ensure that the technology reflects the values and needs of society as a whole.

Decentralised governance

Decentralised governance models, such as those used in open-source communities, can help prevent the concentration of power and ensure that AI systems remain accountable to the public. By distributing decision-making authority across multiple stakeholders, Europe can reduce the risk of AI being used as a tool for control by any single entity.

Focus on public goods

European AI initiatives should prioritise applications that serve the public good, such as healthcare, education, and environmental sustainability. By focusing on areas where AI can have a positive societal impact, Europe can ensure that the technology remains a tool for empowerment rather than control.

Global collaboration

Finally, Europe should collaborate with other nations and international organisations to promote ethical AI development globally. By sharing knowledge, resources, and best practices, Europe can help create a global ecosystem of AI innovation that prioritises human well-being over profit or power.

Contrasting with the Silicon Valley model

The Silicon Valley model is characterised by a focus on proprietary technology, rapid scaling, and a winner-takes-all mentality. While this approach has driven remarkable innovation, it has also led to significant challenges, including monopolistic practices, ethical concerns, and a lack of accountability. For example, the closed nature of many AI systems makes it difficult to understand how decisions are made, raising concerns about fairness and bias.

In contrast, DiEM25’s proposed Green Jobs model, built around democratic workplaces, full transparency, and open IP, offers a more inclusive and sustainable approach to AI development. By prioritising collaboration, accountability, and accessibility, Europe could create AI systems that are not only technologically advanced but also aligned with societal values.

Conclusion

Europe has a unique opportunity to redefine the future of AI by genuinely embracing the values of democracy, transparency, and openness. By fostering democratic workplaces, ensuring full transparency, and making intellectual property openly available, a progressively democratic Europe, free from internal or external oligarchic control, could develop open-source AI tools that build on the advances offered by DeepSeek and other leading systems. Moreover, by establishing strong ethical frameworks, involving the public, and prioritising the public good, Europe can ensure that AI remains a tool that benefits society as a whole and does not become a technology used to control humanity. In contrast to the Silicon Valley model, Europe’s path could demonstrate that the most advanced technologies can also be the most inclusive.

 
