How to talk to your family about AI
The UK government has decided to spend a fortune on promoting AI to the masses, and they are not exactly impartial.

If you have family or friends adopting "AI" tools, make sure they do not fully swallow the koolaid. Pro-gen-AI propaganda is going into hyperdrive as the promised returns on the huge investments fail to materialise.
You might have seen the flop-sweaty pleading from the over-invested AI bigwigs. They need more people using their tools more often if we are going to avoid a catastrophic bubble bursting.
Speaking at the World Economic Forum at Davos, Switzerland on Tuesday, Nadella pontificated about what would constitute such a speculative bubble, and said that the long-term success of AI tech hinges on it being used across a broad range of industries - as well as seeing an uptick in adoption in the developing world where it's not as popular, the Financial Times reports. If AI fails, in other words, it's everyone else's fault for not using it.
The thing is, AI up to now has been largely "meh" for most people. Yeah, some people find it so fascinating and essential that they literally want to marry their virtual fren, but study after study has found limited to no productivity boost for general business tasks.
This is a problem when much of the GDP growth of whole nations depends on more data centers, more GPU investment, more hype in the hype cycle.
So we get to the point in the UK where the official line is to shovel more and more bodies into the system, Soylent Green style.
One problem I hear with this new curriculum, largely supplied by Amazon, Google and Microsoft, is that it amounts to roughly 20-minute talks (up to several hours for the paid courses) on how to register with their chosen "partners" (almost all US-based giants so far) and ask them questions.
The NHS, the British Chambers of Commerce and the Local Government Association are among those who have committed to encouraging their staff and members to sign up.
The lessons are wholly or in part supplied by the tech partners who build the tools. How much will be spent on educating people about the risks and dangers, I wonder?
Will the merit badges include "Successfully generated revenge pr0n of ex-girlfriend" or will it be restricted to "Clippy writes my emails now"?
As a techy who is expected to know about these things, I have been learning about AI, machine learning and computer vision for a long time.
There are uses for this stuff, excellent uses, but pushing and coercing people into using the tools where they are unhelpful or misleading will cause problems, and not just the horrors of unclothing women and girls.
Grok AI generated about 3m sexualised images in less than two weeks, including 23,000 that appear to depict children, according to researchers who said it "became an industrial-scale machine for the production of sexual abuse material".
The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs, and worsen global financial inequality. Critics also highlight the tech's potential to reproduce biased information, or discriminate against some social groups.
The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain.
Generative AI systems are known for their ability to "hallucinate" and assert falsehoods as fact, even sometimes inventing sources for the inaccurate information.
Apple halted a new AI feature in January after it incorrectly summarised news app notifications.
The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself.
Google has also faced criticism over inaccurate answers produced by its AI search overviews.
Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris, and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.
AI is increasingly used in schools and workplaces to help summarise texts, write emails or essays, and fix bugs in code.
There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.