Machine Learning | Artificial Intelligence
- “AI” Hurts Consumers and Workers -- and Isn’t Intelligent (techpolicy.press)
Researchers Alex Hanna and Emily M. Bender call on businesses not to succumb to this artificial “intelligence” hype.
cross-posted from: https://lemmy.ml/post/2811405
> "We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "
- New AI systems collide with copyright law (www.bbc.co.uk)
Artists are worried that their work is being fed into AI systems and are taking legal action.
- Looking for resources on music generation
I am an ML engineer/researcher but have never looked into music before. Some quick googling turns up plenty of websites doing automatic music generation, but it's not clear what methods/architectures they're using. I'm sure I could find papers with more searching, but I'm hoping someone can give me a summary of the current SOTA and maybe some links to code/models to get started with.
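For orientation: the current wave is mostly token-based transformer models (Meta's MusicGen, Google's MusicLM) plus diffusion approaches like Riffusion. Here's a minimal sketch for trying a pretrained text-to-music model through Hugging Face transformers; the model ID is MusicGen's published checkpoint, and the prompt is just an example:

```python
# Generate a short clip with MusicGen via transformers.
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["90s rock song with loud guitars"],  # example prompt
    padding=True,
    return_tensors="pt",
)
audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)  # ~5 s at 32 kHz
print(audio.shape)  # (batch, channels, samples)
```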
- Discussion of llama source code
Where can I go to learn about and discuss Facebook's Llama 2 source code? There aren't many comments in the code.
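For reading rather than discussing: the official weights and inference code live in the facebookresearch/llama repo, and the Hugging Face port (transformers' modeling_llama.py) is a readable reimplementation of the same architecture. A quick way to see the structure, assuming you've accepted Meta's license for the Hub weights:

```python
# Load the Hugging Face port and print the module tree to explore the
# architecture interactively (requires license acceptance on the Hub).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
print(model)  # shows the decoder layers: attention, MLP, RMSNorm, rotary embeddings
```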
- What tools/libraries do you use for MLOps?
The MLOps community is flooded with tooling and pipeline orchestration options. What does your stack look like?
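To seed the thread, a minimal sketch of one common stack piece, experiment tracking with MLflow (parameter values are placeholders):

```python
# Log a run's parameters and metrics to MLflow's local tracking store.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("batch_size", 32)
    mlflow.log_metric("val_loss", 0.42)  # placeholder value
```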
- Almost All Research on the Mind Is in English. That’s a Problem (www.wired.com)
Language can shape how you think in subtle and profound ways. But most researchers only study English speakers.
- Large language models encode clinical knowledge (www.nature.com)
Med-PaLM, a state-of-the-art large language model for medicine, is introduced and evaluated across several medical question answering tasks, demonstrating the promise of these models in this domain.
An update on Google's efforts at LLMs in the medical field.
- Google’s language model “NotebookLM” app hits public testing (arstechnica.com)
Instead of Internet knowledge, NotebookLM's chatbot is based on a source document.
- Generative AI Goes 'MAD' When Trained on AI-Created Data Over Five Times (www.tomshardware.com)
Generative AI goes "MAD" after five training iterations on artificial outputs.
- New ChatGPT rival, Claude 2, launches for open beta testing (arstechnica.com)
US and UK users can converse with Claude 2 through the Anthropic website.
- GPT-4 API general availability and deprecation of older models in the Completions API (openai.com)
GPT-3.5 Turbo, DALL·E and Whisper APIs are also generally available, and we are releasing a deprecation plan for older models of the Completions API, which will retire at the beginning of 2024.
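For anyone migrating off the Completions API, the target is the Chat Completions endpoint. A minimal sketch using the openai Python SDK interface current at the time of this announcement (prompt and key are placeholders):

```python
# Minimal Chat Completions call with the pre-1.0 openai SDK.
import openai

openai.api_key = "sk-..."  # your key
resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```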
- Great series by Andrej Karpathy on machine learning and training
Great series on machine learning. Posting for anyone interested in more of the details on AIs and LLMs and how they're built/trained.
- Adventures in AI Programming: Daily Experiments with GPT-4 (reticulated.net)
Discovering the advantages, disadvantages, processes, and use cases for coding with GPT-4 by building something different every day
- A newbie question on neural networks
In the hidden layers, the activation function shapes what the neural network computes. Is it possible for an AI to generate an activation function for itself, so it can improve upon itself?
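Networks don't usually invent activations from scratch, but some activations carry trainable parameters, so gradient descent tunes the nonlinearity itself; separately, architecture search has been used to discover activations (that's how Swish was found). A minimal PyTorch sketch using PReLU, whose negative-side slope is learned:

```python
# PReLU's negative-branch slope is a trainable parameter, so training
# adjusts the shape of the activation function itself.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),
    nn.PReLU(),      # slope of the negative side is learned
    nn.Linear(8, 1),
)
y = model(torch.randn(2, 4))  # forward pass on dummy input
```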
- Training AI on other AI causes models to collapse (original title: The AI is eating itself) (www.platformer.news)
Early notes on how generative AI is affecting the internet
Hi lemmings, what do you think about this, and do you see a parallel with the human mind?
> ... "A second, more worrisome study comes from researchers at the University of Oxford, University of Cambridge, University of Toronto, and Imperial College London. It found that training AI systems on data generated by other AI systems — synthetic data, to use the industry’s term — causes models to degrade and ultimately collapse" ...
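A toy one-dimensional illustration of the effect (my own sketch, not the Oxford paper's setup): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Finite-sample error compounds and, on average, the distribution's spread decays:

```python
# Toy model-collapse loop: each "generation" is a Gaussian fit to samples
# from the previous fit. On average the spread shrinks over generations;
# any single run is noisy.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=20)        # tiny "dataset" per generation
for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()     # "train" this generation
    data = rng.normal(mu, sigma, size=20)   # next gen sees only synthetic data
    if gen % 25 == 0:
        print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")
```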
- New ROCm™ 5.6 Release Brings Enhancements and Optimizations for AI and HPC Workloads (community.amd.com)
AMD to Add ROCm Support on Select RDNA™ 3 GPUs this Fall. AI is the defining technology shaping the next generation of computing. In recent months, we have all seen how the explosion in generative AI and LLMs is revolutionizing the way we interact with technology and driving significantl...
cross-posted from: https://lemmy.world/post/811496
> Huge news for AMD fans and those who are hoping to see a real* open alternative to CUDA that isn't OpenCL!
>
> *: Intel doesn't count, they still have to get their shit together in rendering things correctly with their GPUs.
>
> > We plan to expand ROCm support from the currently supported AMD RDNA 2 workstation GPUs: the Radeon Pro v620 and w6800 to select AMD RDNA 3 workstation and consumer GPUs. Formal support for RDNA 3-based GPUs on Linux is planned to begin rolling out this fall, starting with the 48GB Radeon PRO W7900 and the 24GB Radeon RX 7900 XTX, with additional cards and expanded capabilities to be released over time.
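If/when the consumer cards land, checking that a ROCm build of PyTorch sees the GPU should look like this; ROCm builds expose AMD devices through the torch.cuda namespace:

```python
# Sanity-check an AMD GPU under a ROCm build of PyTorch.
import torch

print(torch.version.hip)           # HIP version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())   # True if a supported AMD GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```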
- Full DragGAN source code is now released: Interactive Point-Based Manipulation of Images (github.com)
Official code for DragGAN (SIGGRAPH 2023), published in the XingangPan/DragGAN repository.
- MPT-30B: Raising the bar for open-source foundation models (www.mosaicml.com)
Introducing MPT-30B, a new, more powerful member of our Foundation Series of open-source models, trained with an 8k context length on NVIDIA H100 Tensor Core GPUs.
and another commercially viable open-source LLM!
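A minimal load sketch via transformers, assuming you have the GPU memory for 30B weights; MPT ships custom modeling code, hence trust_remote_code=True:

```python
# Load MPT-30B and run a short generation (needs a large GPU or sharding).
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("mosaicml/mpt-30b")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b",
    trust_remote_code=True,  # MPT uses custom modeling code on the Hub
)
out = model.generate(**tok("The capital of France is", return_tensors="pt"), max_new_tokens=8)
print(tok.decode(out[0]))
```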
- MIT researchers make language models scalable self-learners (news.mit.edu)
MIT CSAIL researchers used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.
TLDR Summary:
- MIT researchers developed a 350-million-parameter self-training entailment model to enhance smaller language models' capabilities, outperforming larger models with 137 to 175 billion parameters without human-generated labels.
- The researchers enhanced the model's performance using 'self-training,' where it learns from its own predictions, reducing human supervision and outperforming models like Google's LaMDA, FLAN, and GPT models.
- They developed an algorithm called 'SimPLE' to review and correct noisy or incorrect labels generated during self-training, improving the quality of self-generated labels and model robustness.
- This approach addresses inefficiency and privacy issues of larger AI models while retaining high performance. They used 'textual entailment' to train these models, improving their adaptability to different tasks without additional training.
- Reformulating natural language understanding tasks such as sentiment analysis and news classification as entailment tasks expanded the model's range of applications (see the sketch after this list).
- While the model showed limitations in multi-class classification tasks, the research still presents an efficient method for training large language models, potentially reshaping AI and machine learning.
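The entailment reformulation in the summary is the same trick behind off-the-shelf zero-shot classification: each candidate label becomes a hypothesis, and the model scores whether the input entails it. A small sketch with a public NLI model (not the paper's own 350M model):

```python
# Zero-shot sentiment classification via entailment with a public NLI model.
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
print(clf(
    "The battery dies within an hour of unplugging.",
    candidate_labels=["positive", "negative"],
))
```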
- Accelerating Drug Discovery With the AI Behind ChatGPT – Screening 100 Million Compounds a Day (scitechdaily.com)
By applying a language model to protein-drug interactions, researchers can quickly screen large libraries of potential drug compounds. Huge libraries of drug compounds may hold potential treatments for a variety of diseases, such as cancer or heart disease.
TLDR summary:
- Researchers at MIT and Tufts University have developed an AI model called ConPLex that can screen over 100 million drug compounds in a day to predict their interactions with target proteins. This is much faster than existing computational methods and could significantly speed up the drug discovery process.
- Most existing computational drug screening methods calculate the 3D structures of proteins and drug molecules, which is very time-consuming. The new ConPLex model uses a language model to analyze amino acid sequences and drug compounds and predict their interactions without needing to calculate 3D structures.
- The ConPLex model was trained on a database of over 20,000 proteins to learn associations between amino acid sequences and structures. It represents proteins and drug molecules as numerical representations that capture their important features. It can then determine if a drug molecule will bind to a protein based on these numerical representations alone.
- The researchers enhanced the model using a technique called contrastive learning, in which they trained the model to distinguish real drug-protein interactions from decoys that look similar but do not actually interact. This makes the model less likely to predict false interactions (a toy illustration follows this list).
- The researchers tested the model by screening 4,700 drug candidates against 51 protein kinases. Experiments confirmed that 12 of the 19 top hits had strong binding, including 4 with extremely strong binding. The model could be useful for screening drug toxicity and other applications.
- The new model could significantly reduce drug failure rates and the cost of drug development. It represents a breakthrough in predicting drug-target interactions and could be further improved by incorporating more data and molecular generation methods.
- The model and data used in this research have been made publicly available for other scientists to use.
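The contrastive step above can be sketched with a triplet loss: pull a protein embedding toward a true binder and push it away from a decoy. Random tensors stand in for real embeddings here; this is in the spirit of ConPLex's objective, not the authors' code:

```python
# Toy contrastive step: anchors (proteins) move toward known binders and
# away from look-alike decoys in embedding space.
import torch
import torch.nn.functional as F

protein = torch.randn(8, 128, requires_grad=True)  # anchor embeddings
true_drug = torch.randn(8, 128)                    # known binders
decoy_drug = torch.randn(8, 128)                   # look-alike non-binders

loss = F.triplet_margin_loss(protein, true_drug, decoy_drug, margin=1.0)
loss.backward()  # gradients pull anchors toward binders, away from decoys
print(loss.item())
```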
- AI Translates 5000-Year-Old Cuneiform
A team from Israel has developed an AI model that translates Cuneiform, a 5000-year-old writing system, into English within seconds. This model, developed at Tel Aviv University, uses Neural Machine Translation (NMT) and has fairly good accuracy. Despite the language's complexity and age, the AI was successfully trained and can now help to uncover the mysteries of the past. You can try an early demo of this model on The Babylon Engine; its source code is available on GitHub (Akkademia) and on Colaboratory.
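For a sense of what the NMT interface looks like in practice, here's a generic translation pipeline; this loads a public German-to-English model, not the Akkademia cuneiform weights:

```python
# Generic neural machine translation call, for illustration only.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
print(translate("Die Königin baute einen Tempel.")[0]["translation_text"])
```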
- 7 AI Companies That Could Become Trillion-Dollar Companies (markets.businessinsider.com)
InvestorPlace - Stock Market News, Stock Advice & Trading Tips As the world marches towards a future defined by artificial intelligence, the s...
- Meta AI Reveals Game-Changing I-JEPA: A Leap Forward in Self-Supervised Learning Mimicking Human Perception and Reasoning
Meta AI has revealed I-JEPA, the first model based on Yann LeCun's vision for more human-like AI. It learns by comparing abstract representations of images, not the pixels, and this self-supervised learning model fills in knowledge gaps in a way that mirrors human perception. I-JEPA is adaptable and efficient, offering robust performance even with a less complex model. Excitingly, the code for this pioneering technology is open-source. Check it out on GitHub!
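A toy sketch of the JEPA idea (hypothetical dimensions, not Meta's code): predict the embedding of a target image patch from a context patch, instead of reconstructing pixels:

```python
# Toy JEPA-style objective: match predicted embeddings to target embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

context_encoder = nn.Linear(768, 256)
target_encoder = nn.Linear(768, 256)   # in I-JEPA this is an EMA copy; frozen here
predictor = nn.Linear(256, 256)

context_patch = torch.randn(4, 768)    # stand-ins for patch features
target_patch = torch.randn(4, 768)

with torch.no_grad():                  # no gradients through the target branch
    target = target_encoder(target_patch)
loss = F.mse_loss(predictor(context_encoder(context_patch)), target)
loss.backward()
```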
- 13B-parameter Orca LLM is redefining what small LLMs are capable of (original title: Orca 13B: the New Open Source Rival for GPT-4 from Microsoft) (docs.kanaries.net)
The cutting-edge Orca 13B model from Microsoft is now small enough to run on your laptop; it learns from GPT-4 and imitates its reasoning processes.
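The "imitate reasoning" part is what the Orca paper calls explanation tuning: the student model trains on the teacher's step-by-step traces rather than just final answers. A hypothetical record shape (field names are illustrative, not Microsoft's schema):

```python
# Illustrative explanation-tuning record: the target is the teacher's
# worked reasoning, not merely "80 km/h".
sample = {
    "system": "Think step by step and justify your answer.",
    "question": "A train covers 60 km in 45 minutes. What is its speed in km/h?",
    "teacher_response": "45 minutes is 0.75 h, so speed = 60 km / 0.75 h = 80 km/h.",
}
print(sample["teacher_response"])
```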