Alphabet, Google’s parent company, has quietly revised its artificial intelligence (AI) principles, removing its previous commitment not to pursue applications that could cause harm – including weapons and surveillance tools.

In an update to its AI policy, the tech giant dropped language that explicitly ruled out such uses, instead emphasising collaboration between businesses and democratic governments to develop AI that “supports national security.”

In a blog post announcing the changes, Google’s senior vice president James Manyika and Google DeepMind CEO Demis Hassabis defended the move, citing the need for AI principles to evolve alongside the technology.

“AI has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself,” they wrote.

They argued that as AI becomes a general-purpose technology used across industries, its governance must adapt to the growing complexity of the geopolitical landscape.

The revised guidelines come amid growing debate over AI’s role in defence and surveillance, as well as concerns over how commercial interests influence the direction of AI development.

Google has previously faced employee backlash over AI-related military contracts. In 2018, it declined to renew its controversial Pentagon contract, Project Maven, following internal protests over the potential use of AI in warfare.

Alphabet’s policy update coincided with the release of its latest financial report, which fell short of market expectations, leading to a dip in share price. Despite this, the company remains bullish on AI investment, pledging $75 billion (€60 billion) in AI projects for 2025 – 29 per cent more than Wall Street analysts had projected.

Much of this spending will go towards AI research, infrastructure, and integration into Google’s core services, such as AI-powered search results through its Gemini platform.

Google’s early motto was “Don’t be evil,” later revised to “Do the right thing” following the company’s restructuring under Alphabet in 2015. Its latest AI policy shift signals a more pragmatic approach – one that is likely to spark further debate about the ethical boundaries of AI in an increasingly AI-driven world.
