How much would we save if we could somehow debloat and deshittify the Internet and all devices? Climate impact, overconsumption of unnecessary crap, mental health care...
An interesting topic, but the article has virtually no information on it, and what was there was unsourced and confusing. Maybe I'm just tired and not seeing it, but damn, the "taking 50 Belgiums to the moon" comparison really got me confused. I agree in general though: new technologies take energy, and we need to decarbonise our energy generation as quickly as possible.
I'd actually be really interested in a proper deep dive into this topic, though. What kinds of tasks are people using these assistants for, and how does the energy use of an assistant compare with how people did those tasks before? I'm sure it's more energy intensive, but it'd be interesting to understand more, at least for me.
I agree that the article is a bit confusing, but we can't keep increasing energy consumption and hope decarbonization will fix it.
From an environmental point of view, energy is never free. Also, as long as we still use fossil fuels, any new use of renewables (e.g. running AI on solar panels) is energy that could have been used to replace fossil usage.
What do you propose, exactly? We have the technology right now to decarbonise our grid; it's even the sensible move economically at this point. Are you saying we should all stop having kids and stop building anything new that uses electricity? I assume that's not your position, but that's what I took from reading your comment.
I have no idea, that's kind of my point. I'm not trying to argue that it's not much, or that it's a lot, or that it's worth it or not, just saying I have no idea and neither that article nor any of the ones you linked gave me the answer.
I think it's an important consideration, so I'd love more information but it seems that it's not available. Maybe it's hard to calculate because things like the energy used and exact amount of compute are trade secrets or something, I don't know. It'd be nice to know though.
I don't really care about other commenters saying that the article doesn't have a reliable enough source. I know that commercial LLMs are terrible resource consumers and since I don't support their development I think they should be legally banned for this very reason.
That is a very valid and reasonable opinion, sorry to see it downvoted.
There will be strong disagreement with you, however, on the claim that LLMs are a big enough resource hog to warrant outright banning for that reason alone.
If you are looking to land hits on Big Tech, try things like writing laws that require all energy consumption in datacenters to be monitored and reported using established cross-disciplinary methods.
Or getting people to stop buying phones every year. Or banning disposable vapes.
I knew it was going to be downvoted. People here mostly support AI. But I don't, and what I meant is that I would love governments to ban it (obviously). Energy efficiency is the simplest reason to give them, so yeah. Sorry everyone, but I'm old school. Put your fancy AI bells and whistles away and embrace efficient, old, proven ways of computing such as the GUI, the TTY, and search engines (which still consume a lot, but not as inefficiently). At least they don't burn 10 MW (or a few seconds of full-load CPU time and 200 GB of disk, if it's a local LLM) to calculate 2+2*2, or to give you a link to the Wikipedia article explaining what a helicopter is (*cough cough* Bing *cough cough*). And they hallucinate way less often too.
You can't ban LLMs at this point: they're too useful, it's impossible to track their use, they can be run anywhere on the globe, and open-source models that you can run locally already exist.
Local LLMs are not that bad. Sure, they're 100x less efficient than a native calculator or a search engine, but only a small percentage of people use them, and tracking them down would probably use even more energy, so it's not that big of a deal. I don't have much against AI research either, so training is fairly justified too (in terms of energy, that is, not in terms of using data without permission). It's only the large commercial cloud-based services with their enormous infrastructure that should probably be banned.
I wonder how they measured this. Could it just be that they get more utilisation? Per capita probably isn't an adequate measure either.
You would need a measure that's an analogue of per capita. Maybe per result? For instance, I could spend half an hour trying to find just the right set of keywords to bring up the right result, or I could spend 5 minutes in a chat session with an AI honing the correct response.
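The "per result" idea can be sketched as a toy calculation. To be clear, every number below is a placeholder I made up purely so the arithmetic is easy to follow; none of them are measured figures from the article or anywhere else:

```python
# Back-of-envelope "energy per answered question" comparison.
# ALL inputs are illustrative assumptions, not measurements.
search_wh_per_query = 0.3   # assumed Wh per search-engine query
llm_wh_per_prompt = 7.5     # assumed: 25x a search, per the article's figure
queries_per_answer = 10     # assumed tries needed to find it via search
prompts_per_answer = 3      # assumed chat turns needed with an LLM

# Wh spent per question actually answered, by each route
search_total = search_wh_per_query * queries_per_answer
llm_total = llm_wh_per_prompt * prompts_per_answer

# How much worse the LLM route is *per result* under these assumptions
ratio = llm_total / search_total
```

With these made-up numbers the per-result gap shrinks from 25x to 7.5x, which is the commenter's point: the headline multiplier depends entirely on how many attempts each route needs per successful answer.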
The wording of the article implies an apples-to-apples comparison, so 1 Google search == 1 question successfully answered by an LLM. Remember, a Google search in layspeak is not the act of clicking the search button; rather, it's the act of going to Google to find a website that has the information you want. The equivalent with ChatGPT would be starting a "conversation" and getting the information you want on a particular topic.
How many search-engine queries or LLM prompts that involves, or how broad the topic is, is a level of technical detail one assumes the source for the 25x figure has already controlled for. (Feel free to ask the author for the source and share it with us, though!)
Anyone who has used any kind of deep learning, even briefly, will know right away that it uses an order of magnitude or two more power (and delivers an order of magnitude or two more capability!) than algorithmic, rules-based software, so a figure like 25x for a similar effective outcome would not be surprising at all if the approach used is unnecessarily complex.
For example, I could write a neural network to compute 2+2, or I could use an arithmetic calculator. One requires a $500 GPU drawing 300 watts; the other, a $2 pocket calculator sipping a fraction of a watt, returning the answer before the neural network has even finished loading.
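For the sake of the joke, here is a deliberately overwrought way to "learn" 2+2, next to the sane way. This is a toy two-weight linear model trained by gradient descent in plain Python (a real LLM is billions of parameters, so the overhead gap here massively understates the real one); all the numbers are arbitrary:

```python
import random

# Toy "neural network": learn f(a, b) = w1*a + w2*b so that f(a, b) ≈ a + b.
random.seed(0)
w1, w2 = random.random(), random.random()  # random initial weights
lr = 0.01                                  # learning rate
data = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(200)]

for _ in range(5):                # a few epochs of stochastic gradient descent
    for a, b in data:
        err = (w1 * a + w2 * b) - (a + b)  # prediction minus true sum
        w1 -= lr * err * a                 # gradient step on each weight
        w2 -= lr * err * b

neural_sum = w1 * 2 + w2 * 2      # the "deep learning" answer to 2+2 (≈ 4.0)
plain_sum = 2 + 2                 # the pocket-calculator answer (exactly 4)
```

After training, `w1` and `w2` both end up very close to 1, so the network's answer lands near 4.0; the point is how much machinery that took compared with `2 + 2`.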
However many years it takes for these LLM fools to wake up, hopefully they can find a way to laugh at themselves for thinking it was cutting-edge to jam the internet into a fake jellyfish brain and call it GPT. I haven't looked recently, but I still haven't seen anyone talking about neuroglial networks and how they will revolutionize the applications of AI.
There’s a big*** book, but apparently no public takers in the deep neural network space?
At this point I basically need to do 25 Google searches to find what I'm looking for anyway, so this is a stupid comparison. When I eat cabbage and beer, my digestive tract releases more GHGs than my whole day of using ChatGPT (zero). I just need to figure out how to harvest and burn my own methane so I can do more ChatGPT queries guilt-free.