The National Archives is launching a public-facing AI chatbot called "Archie." Employees have concerns.
In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called “AI-mazing Tech-venture” in which Google’s Gemini AI was pitched as a tool archives employees could use to “enhance productivity.” During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.
In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”
Employee chat logs from the presentation show that National Archives employees are worried about the prospect of AI tools being used in archiving, a practice that is inherently concerned with accurately recording history.
One worker who attended the presentation told 404 Media, “I suspect they're going to introduce it to the workplace. I'm just a person who works there and hates AI bullshit.”
My partner is an archivist, and we've talked about AI a lot.
Most people in their field hate this shit because it undermines so much of what matters in their jobs. Accuracy is critical, and the presentation of the archive requires humans who understand it. History is complex; it requires context, nuance, and an understanding of basic ideas and concepts.
Using "AI" to parse and present the contents of the archive pollutes it, and gives the presentation over to software that can't possibly begin to understand the questions or the answers.
There are more than enough technological advances in this field to help with digital archiving; adding an LLM doesn't help anything.
All of which should be done by human beings. Period.
Currently, maybe. But technology is fantastic at accuracy, better than humans in many regards. Gemini might have a way to go before it gets there, but it or its successors will get there and it's moving fast.
Productivity is irrelevant here.
I'm not sure it is. Productivity also refers to efficiency of services. If AI can make the services of the National Archives more productive for its staff and/or the public then surely that's a good thing?
But technology is fantastic at accuracy, better than humans in many regards.
This isn't about "technology", it's about large language models, which are neither "fantastic at accuracy" nor "better than humans".
Gemini might have a way to go before it gets there, but it or its successors will get there and it's moving fast.
Large language models are structurally incapable of "getting there", because they are models of language, not models of thought.
And besides, anything that is smart enough to "get there" deserves human rights and fair compensation for the work they do, defeating the purpose of "AI" as an industry.
If AI can make the services of the National Archives more productive for its staff and/or the public then surely that's a good thing?
The word "If" is papering over a number of sins here.
Given that photocopiers can do a scribe's job (copy the text on this page onto a new page), more quickly and accurately to boot, I presume you are part of a pressure group to pay them pensions.
Given that photocopiers can do a scribe's job (copy the text on this page onto a new page),
That's not a scribe's job, that's not even the entirety of an apprentice scribe's job (which also includes making paper, making ink, bookbinding, etc.)
A scribe's job is to perform secretarial and administrative duties, everything from record-keeping and library management to the dictation and distribution of memoranda.
A photocopier is not capable of those things, but if it were, then it'd be deserving of the same compensation and legal status afforded to the humans that currently do it.
I presume you are part of a pressure group to pay them pensions.
We have to start treating things that claim to be "AI" as deserving of human rights, or else things are going to get very ugly once it's possible to emulate scanned human brains in silicon.