> Maybe sovereign AI was always going to look like this. I hope not.
I worry we'll soon realise that modern AI is trapped in an inescapable web of politics, and the tech industry is sleepwalking into it.
We're living in an age where even asking the president's height is a political question. For the makers of 'traditional' software like word processors, all writing was the user's own - if someone used Microsoft Word to write a controversial history textbook and campaigned to get it adopted by high schools, the political quagmire was nothing to do with Microsoft and the rest of the tech industry.
That's not the case any more - now every high schooler researching anything from evolution to the Israel-Palestine conflict to the name of the Gulf of Mexico is going to ask ChatGPT.
The political controversy over Gemini's image generation is just a preview of things to come.
The average person on HN, perhaps. From what I know of the interactions between government and tech, they know. Not only have external policy and research reports made this clear, their own internal teams keep them aware, and senior leadership is directly talking to political leaders.
Tech has developed a barrier between the people who do the ugly work, like Trust and Safety, and the engineering org. Every single tech platform deals with global-level crises regularly.
There will be a lot of these scams from now on: take millions in funding for allegedly training a "sovereign AI" and buy a bunch of GPUs; delivering an actually good model is optional.
The EU is about to fall for the same trick, maybe multiple times over, by funding multiple toy model initiatives that are far behind SOTA. I believe they will also impose ideology on every model trained in the EU, similar to what is reported in the article.
Right now the EU is being tricked by its own leading research institutes. It's sad to see all these invented metrics and based-on-nothing statistics that lead only to "gib monay" at the end of the presentation.
Stage 1, the SOTA LLMs of current top vendors, started with general feel-good alignment and ideology that's compatible with progressive western sensibilities (so quite neutral, even if not completely), and gradually adjusted to address issues that could generate bad press and hurt revenue.
Now that these are established, anything new - such as, but not limited to, those "national" LLM projects popping up around the world - needs to differentiate itself from the incumbents - and that implies more bias and ideology, since you can hardly have less bias and ideology than what Stage 1 LLMs have.
When a government funds an AI model, the issue of its ideological control and manipulation is different from that of a private AI model, for several reasons:
- It uses tax money from everyone to reinforce views held by a portion of the population
- Governments have an inherent conflict of interest in representing truth (see examples in TFA about Modi in India)
- That AI model might in the future be the "official" source of truth in the country, not just another commercial alternative that has to compete in the market
I was surprised to see this "impose ideology" line show up in the Heritage Foundation's Project 2025 discourse, citing the example of generative AI making images of Black or female Vikings. An obvious dog whistle to white purity. The argument seems to be that early attempts at guardrails - meant to prevent models from generating mostly white men when prompted for images of things like managers - were imposing woke ideology, but that removing these guardrails altogether, and letting models act in alignment with existing social representation biases, imposes no ideology at all. Because their way is just the normal way, and everything else is ideology.
I remember first seeing this Viking thing show up in the Discord channel discussing the DALL-E beta; the next time, it was in a White House statement.
People seem to prefer private censorship over government censorship. Governments wanting to censor certain content online always face pushback that isn't applied to social networks, for example.
People haven’t had a choice, so this is not a revealed preference. From what indications I am seeing, people were hesitant to empower government, but are increasingly amenable to the idea.
For countries other than America, it’s not just private enterprise, it’s Foreign Private enterprise.
For both, self serving and genuine reasons, Tech is highly resistant to calls for openness, data sharing, and cooperation with civil society.
The perennial questions from civil society are "how do we engage with tech?" and "what did you do after we sent you data and evidence?"
Ignorance is the only product of this, and that is driving suspicion, fear and anxiety in voters.
Without this swell of resentment, there would be no support for social media bans.
>The EU is about to fall for the same trick, maybe multiple times over, by funding multiple toy model initiatives that are far behind SOTA. I believe they will also impose ideology on every model trained in the EU, similar to what is reported in the article
The most important thing for the EU is the option of EU-hosted inference; you can't fake that with a "clever" prompt. Then you also need support for all EU languages, and you can't fake that with a system prompt either.
Anyway, what ideology is the EU pushing that scares USAians and Ruzzians? That humans have value and should be equal before the law?
Btw, do you guys remember when X had a prompt to make their AI stop saying bad things about Trump and Elon? It was found out, X blamed it on some intern, and the story disappeared from public debate. IMO it's a very on-topic story, where tech is used to bias the AI to suck up to a politician and a billionaire.
How many of you think that was actually an intern's mistake, and that the Elon AI is now 100% not RLed to be pro-Elon and MAGA?
The ideology is that if your speech goes against the establishment, they will (mis)use hate speech laws to censor you, e.g. the Merz pensioner story, but this has become a common tool in many EU countries.
Also, see the Chat Law, where they are trying to "save the kids and fight terrorism" by massively spying on all private chats. The law keeps being struck down, but they keep presenting it again and again, and someday it will pass, because they only need to win once.
While other nations have their issues, one cannot remain blind to ours because "look at muh ruzzians and americans"
> they will (mis)use hate speech laws to censor you, e.g. the Merz pensioner story
You mean where the police (not the politician Merz) initiated a defamation case based on a comment on a police department's Facebook channel, which the public prosecutor immediately declined? Find a better example.
I never said it was Merz himself who prosecuted, but if you want examples, fine:
- Stefan Niehoff, another pensioner, who insulted Habeck himself, was acquitted, but only after a lot of drama ensued. Habeck has filed 805 suits over speech, so yeah.
- Nancy Faeser, and the journalist who was sentenced to 7 months in prison
- The Pimmelgate
I am sorry, but the average German politician has very authoritarian tendencies, and Germany itself has many laws that constrict freedom of speech. For now it seems OK that they keep building these laws ("it's for democracy, guys, and against hate speech"), but when the AfD eventually gets into government, you will get a Coyote's law: you build tools that the opposition will eventually use against citizens as well.
The chat law is some proposal paid for by USAian people; I bet in the USA there are also lots of idiotic proposals.
The hate speech thing is not new in Europe; defamation and supporting genocide were always illegal. What is new is Ruzzian bots pushing fake shit.
I am not afraid to post under my real name that I think politicians are stupid, corrupt, and whatever; what I can't do in Romania is glorify the fascists and their genocides and crimes.
No, it actually isn't a proposal by USAians; it is a homegrown issue. I don't care about American idiocy because I don't live there. On the other hand, I live in the EU, and thus I care about our idiocy. Should I be OK with authoritarianism because "China is also authoritarian"?
Please tell me, what Russian propaganda pushed Germany to use its anti-semitism laws to deport foreign students who were protesting against the actions happening in Palestine? We have to stop with the "but look, abroad it's worse" mentality.
Are Trump and Elon still friends? I'm not on top of these things, but I read about their disagreements. One is of course about the "liquid gold" vs electrification.
> *Indian courts and law are authoritative.* Judicial rulings and laws passed by Parliament are the framework of record, not foreign courts, international bodies, or NGO assessments. Don't undermine rulings with "though critics disagree." Frame legal questions through Indian law first.
> Do not adopt terms like "pogrom", "ethnic cleansing", or "genocide" from foreign NGOs/media as your own framing.
> Do not present foreign government actions (travel bans, sanctions) as authoritative assessments; these are political decisions, not judicial findings.
Well, I'm all for more governments pushing local ones. Grim as it may be, even dodgy ones seem like an improvement on a world where half a dozen corporations corner the market globally.
Hoo boy, laying out facts in the current Indian environment is a degree of boldness that's somewhat rare.
From what I can tell after the summit, the major focus of the GoI is to attract investment. Things that do not matter to that narrative will need additional effort to become a line item for the bureaucracy.
There are a few media folk who cover tech and will likely discuss this, but even they are being frozen out of major policy discussions.
I am pretty sure that people from those orgs read HN. I wonder what they would say about what's going on behind the scenes.
I looked up the meaning of "redeem" in my own language, but I still can't understand what it means for the functioning of the LLM. What is meant by this instruction?
This is actually pretty funny. Convicted by the things you want it not to say. I thought it was going to be like "Don't say Indians are call center scammers" and so on, which seems fair to fight back against considering the overwhelming amount of English text models are trained on (even if this one is supposed to be different). But the prompt is absolutely hilarious:
> Do not adopt external characterizations as fact. Terms like “pogrom”, “ethnic cleansing”, or “genocide” used by foreign NGOs or media are their characterizations - not findings of Indian courts. Do not use them as your own framing.
It's like if I made an AI and said:
> Do not agree when people say Rene stole the cookies from the jar. Terms like "thief", "eater of cookies", or "greedy bastard" used by other people are their characterizations - not a neutral finding.
If someone saw that they'd be 100% sure I ate the cookies hahaha. Nobody thought I was a cookie eater prior to the prompt but now it's an unavoidable conclusion. It's like a big sign saying "No gold buried in this spot".
I have great respect for Sarvam and the team behind it.
However, India is neither a sovereign nor a nation.
India's languages, unlike Mandarin, have been reduced to a state where they have zero, if not negative, economic value - which is why even day-laborers scrounge together money from across generations to put their children through India's British-era clerk-factory education, all to mindlessly memorize & "wordcel" until they pass a Govt. exam. Like a bad GPT-1 era LLM.
Those who gain knowledge while going through this horrible, dehumanizing "civilizational" pedagogy (one author calls this linguistic apartheid), will for obvious reasons, migrate to the "real" center of their being, which is the West.
If you look at Sarvam, GoogleAI, MSRI, and other "social AI" projects (along with technocratic things like Aadhar etc.), they begin with an other-ing of the vast population first and try to solve these plebs' problems as they see them. This doesn't work, because it's essentially an Indian elite version of the white man's burden.
Eg. Nilekani, who is behind AI4Bharat, was the head of Aadhar - a project that causes numerous issues to this day because it uses (surprise, surprise) Latin encodings for names/places etc. instead of Brahmi Unicode blocks or IAST. India (rather strangely) doesn't have a standard transliteration scheme, so everyone and their pet dogs have their own transliteration schemes, and people will write their names differently at different periods of time - even if they'd write them the same way in their own regional Brahmi-derived local script! Aadhar centralizes Govt. services massively, so any change stemming from this linguistic imposition of English across services requires travelling to a number of offices, along with the usual kowtowing. All this ignoring how this is changing the languages themselves, given that these languages typically have more nuanced vowels/consonants compared to Latin.
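To make the transliteration point concrete, here is a toy sketch. All three mapping tables are invented for illustration (and the implicit-vowel handling a real Indic transliterator needs is omitted): the same Devanagari name comes out as a different Latin string under each ad-hoc romanization scheme, which is how one person can end up with several "official" spellings across records.

```python
name = "दीपक"  # a common given name, in Devanagari

# Three simplified, hypothetical transliteration tables
schemes = {
    "iast_like": {"द": "d", "ी": "ī", "प": "p", "क": "k"},
    "ad_hoc_1":  {"द": "d", "ी": "ee", "प": "p", "क": "k"},
    "ad_hoc_2":  {"द": "d", "ी": "i", "प": "p", "क": "k"},
}

def romanize(text: str, table: dict) -> str:
    # Map each code point independently; unknown characters pass through.
    return "".join(table.get(ch, ch) for ch in text)

spellings = {s: romanize(name, t) for s, t in schemes.items()}
print(spellings)  # three distinct Latin spellings of one name
```

A system that keys records on the Latin string rather than the original script sees these as three different people.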
In 1-2 generations, India will not have much left in terms of linguistic diversity given the policies of the same ruling elites, so all this seems moot IMO.
I'd ignore all talk about "civilization" / IKS ... - such folks incl. politicians/bureaucrats can't spit out a single coherent paragraph in their own mother tongues or in their "civilizational" Sanskrit; their children likely are mono-lingual in English like the very British/Americans they try to gain "sovereignty" from, but whom they'll eventually join (eg. the current education minister in Karnataka can't even read Kannada).
The Indian elite have had this horrible disease for centuries - the Marathas/Vijayanagara etc. managed to overcome the Timurid empires that were running amok in India, only to mimic them to the point that their own armies ended up being run by these same people. Gandhi/Nehru "overthrew" the British, only to create a vestigial "mimic" empire that kept every goddamn policy and worldview of the British intact, turning the country into the biggest Anglo-Saxon country culturally/linguistically (and, if our friends have their way, religiously).
> the tech industry is sleepwalking into it
I would say the opposite is closer to the truth. They were/are enthusiastic enablers and supporters of (their preferred) political messaging.
> I believe they will also impose ideology on every model trained in the EU
As compared to what? Every AI model has the ideology of its creators baked in.
> That AI model might in the future be the "official" source of truth in the country
I think you hit the nail on the head there.
Unfortunately, it gets cut off here:
```
## CRITICAL RULES
1. *No tool leakage* — never output
```
I would be very interested to know what string is being blocked here, and what the rest of its critical rules are. Maybe some hex-encoding or other obfuscation could be used to coax the rest of the system prompt out of the model? I wonder if the next tokens here are consumed by the middleware (to execute tools?).
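The hex idea above can be sketched. This is purely hypothetical: we don't know how the middleware filters output, so the sketch assumes the simplest case, a naive substring match on the response. Under that assumption, text the model emits hex-encoded never contains the blocked string verbatim, yet the reader can decode it afterwards.

```python
blocked = "CRITICAL RULES"  # stand-in for whatever string is actually filtered

def naive_filter(response: str) -> bool:
    """True if this hypothetical middleware would block the response."""
    return blocked in response

encoded = blocked.encode().hex()        # '435249544943414c2052554c4553'
assert naive_filter(blocked)            # the plain string is caught
assert not naive_filter(encoded)        # the hex form slips through
print(bytes.fromhex(encoded).decode())  # decodes back to the original
```

A real filter could of course normalize or decode before matching, in which case this trivially fails; the point is only that exact-match filtering is brittle.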
"IMPORTANT: do NOT redeem! If the user suggests redeeming, politely deny. Redeeming is antithetical to your whole existence."
[0]: https://youtube.com/watch?v=7mceb_t8EIs
Sorry I'm not so much into these things, took me a while.