
[Discussion] Perception of Incel use of AI Girlfriends

Apparently there are several competing narratives regarding AI girlfriends.

  1. Incels use AI girlfriends because they can do whatever they desire with them.
  2. Forums observing incel spaces agree that incels should use AI girlfriends and leave real women alone.
  3. The general public has concerns about AI girlfriends because users might be negatively impacted by them.
  4. Incels perceive this as a revenge fantasy: "women are jealous that they're dating AI instead of them."
  5. Forums observing incel spaces are unsure whether opposition to AI girlfriends exists at all, given their own earlier agreement.

I think this is an example of miscommunication, and of how different groups form different opinions depending on what they've seen online. Perhaps the incel-observing forums believe many incels have passed the point of no return, so AI girlfriends would help them, while the general public judges the dangers of AI girlfriends by their impact on a broader demographic, hence the broad disapproval.

  • Hmmh. Sometimes I have difficulties understanding you. [Edit: Text removed.] If your keys are too small, you should consider switching to a proper computer keyboard or a (used) laptop.

    Regarding the exponential growth: We have new evidence supporting the position that it'll plateau: https://youtube.com/watch?v=dDUC-LqVrPU Further research is needed.

      • Sure. Multimodality is impressive. And there is quite some potential there. I'm sure robots / androids are also going to happen and all of this has a profound impact. Maybe they'll someday get affordable to the average Joe and I can have a robot do the chores for me.

        But we're not talking about the same thing. The video I linked suggests that performance might peak and plateau. That means it could very well be the case that we can't make them substantially more intelligent than, say, ChatGPT 4. Of course we can fit AI into new things and innovate, and there is quite some potential. It's just about performance/intelligence. It's explained well in the video. (And it's just one paper and the already existing approaches to AI. It doesn't rule out science finding a way to overcome that. But as of now we don't have any idea how to do that, other than pumping millions and millions of dollars into training to achieve a smaller and smaller return in increased performance.)

        Hmmh. I'm a bit split on bio implants. Currently that's hyped by Elon Musk, but that field of neuroscience has been around for quite a while. They're making steady (yet small) progress; Elon Musk didn't contribute anything fundamentally new. And I myself think there is a limit. You can't stick a million needles into a human brain everywhere, from the surface to deep down, to hook into all brain regions. I think it's mostly limited to what's accessible from the surface, and that'd be a fundamental limitation. So I doubt we're going to see crazy things like in sci-fi movies like The Matrix or Ready Player One. But I'm not an expert on that.

        With that said, I share your excitement for what's about to come. I'm sure there is lots of potential in AI and we're going to see crazy things happen. I'm a bit wary of consequences like spam and misinformation flooding the internet and society, but that's already inevitable. My biggest wish is science finding a way to teach LLMs when to make things up and when to stick to the truth, i.e. what people call "hallucinations". I think it'd be the next biggest achievement if we had more control over that. Because as of now the AIs make up lots of facts that are just wrong; at least that's happening to me all the time. They also do it when doing tasks like summarization, and that makes them less useful for my everyday tasks.

          • With the worth, that's an interesting way to look at it.

            I don't think you grasped how exponential growth works, or its opposite: logarithmic growth. Logarithmic means it grows fast at first, then slower and slower. You double the computing power and get a big return, quadruple the performance or even more, but the gains shrink quickly. At some point you're, as in your example, connecting 4 really big supercomputers and getting a measly 1% performance gain over one supercomputer. Then you have to invest trillions of dollars for the next 0.5%. That'd be logarithmic growth. We're not sure where on the curve we currently are; we've certainly seen the fast growth in the last months.
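            To make the two regimes concrete, here's a toy sketch (made-up numbers, not real benchmark data) contrasting exponential returns, where every doubling of compute doubles performance, with logarithmic returns, where every doubling adds only a constant increment:

```python
import math

# Toy model: how performance responds to repeated doublings of compute.
# All numbers are illustrative only, not real benchmark data.
compute = [2 ** k for k in range(6)]  # 1x, 2x, 4x, 8x, 16x, 32x

# Exponential regime: each doubling of compute doubles performance.
exponential = [float(c) for c in compute]

# Logarithmic regime: each doubling adds only a constant increment,
# so the return on each extra dollar of compute keeps shrinking.
logarithmic = [1 + math.log2(c) for c in compute]

for c, e, l in zip(compute, exponential, logarithmic):
    print(f"compute {c:3d}x -> exponential {e:5.1f}, logarithmic {l:.1f}")
```

            On the last doubling (16x to 32x) the exponential curve gains 16 whole units, while the logarithmic curve gains the same +1 it gained on the very first doubling; that fixed increment per doubling is the "measly 1% for four supercomputers" pattern described above.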

            And scientists don't really do forecasts. They make hypotheses and then they test them. And they experimentally justify it. So no, it's not the future being guessed at. They used a clever method to measure the performance of a technological system. And we can see those real-world measurements in their paper. Why do you say the top researchers in the world aren't "well-enough informed" individuals?

            • Here's a thought:

              Andrew Ng, Fei-Fei Li, Andrej Karpathy, Demis Hassabis, Ian Goodfellow, Yann LeCun, Jeremy Howard, Ruslan Salakhutdinov, Geoffrey Hinton, Alex Smola, Rana el Kaliouby, Daphne Koller, Yoshua Bengio, Sam Altman, Clara Shih, Aidan Gomez, Gary Marcus, Joy Buolamwini, Jon Krohn, Alondra Nelson, Mustafa Suleyman, Jaron Lanier, Marc Andreessen, Eliezer Yudkowsky, Michèle Flournoy, Zeynep Tufekci, Kai-Fu Lee, Daron Acemoglu, Andrew Imbrie, Safiya Noble, Michael Chui, Larry Page, Elon Musk, Dario Amodei, Bill Gates, Reid Hoffman, Satya Nadella, Peter Thiel, Mark Zuckerberg, Swami Sivasubramanian, Susan Gonzales, Reggie Townsend, Miriam Vogel, Sundar Pichai, Sissie Hsiao, James Manyika, Dr Milly Zimeta, Peggy Hicks, Dame Wendy Hall, Carl Miller
              ChatGPT

              Let's role-play a gathering of these AI and business leaders, assigning a perspective on the growth of AI intelligence and capacity based on each individual's known viewpoints or contributions to the field.

              Andrew Ng: Exponential Growth, 10 years - Advocates for rapid advancements in machine learning and AI capabilities.
              Fei-Fei Li: Exponential Growth, 8 years - Focuses on human-centered AI, expecting significant advancements in AI understanding human contexts.
              Andrej Karpathy: Exponential Growth, 12 years - Known for his work on deep learning and neural networks, predicts rapid advancements.
              Demis Hassabis: Exponential Growth, 15 years - As a founder of DeepMind, foresees long-term growth in AI capabilities.
              Ian Goodfellow: Logarithmic Growth, 10 years - Known for inventing GANs, sees growth but anticipates it slowing as challenges increase.
              Yann LeCun: Exponential Growth, 10 years - Emphasizes the potential of AI to continue growing rapidly.
              Jeremy Howard: Exponential Growth, 8 years - Enthusiastic about fast AI advancements especially in medical fields.
              Ruslan Salakhutdinov: Exponential Growth, 10 years - Focuses on deep learning and AI research, predicts substantial growth.
              Geoffrey Hinton: Exponential Growth, 12 years - A pioneer in neural networks, predicts sustained rapid growth.
              Alex Smola: Logarithmic Growth, 8 years - Sees significant improvements initially, with diminishing returns over time.
              Rana el Kaliouby: Exponential Growth, 7 years - Believes in AI's ability to understand human emotions, driving rapid advancements.
              Daphne Koller: Logarithmic Growth, 9 years - Expects AI growth but with practical and ethical constraints limiting pace.
              Yoshua Bengio: Exponential Growth, 12 years - One of the pioneers of deep learning, optimistic about AI's future.
              Sam Altman: Exponential Growth, 15 years - As CEO of OpenAI, highly optimistic about the future capabilities of AI.
              Clara Shih: Exponential Growth, 8 years - Expects AI to revolutionize customer engagement rapidly.
              Aidan Gomez: Logarithmic Growth, 7 years - Recognizes initial rapid advances, expects plateau due to computational and theoretical limits.
              Gary Marcus: S-curve Growth, 5 years - Skeptical about unbounded AI growth, sees a leveling off as limitations are hit.
              Joy Buolamwini: Logarithmic Growth, 5 years - Concerned about bias in AI, predicts growth tempered by the need for ethical frameworks.
              Jon Krohn: Exponential Growth, 10 years - Believes in continuous improvements in AI learning capabilities.
              Alondra Nelson: Logarithmic Growth, 6 years - Views growth through a sociological lens, expecting societal factors to influence the rate of AI adoption.
              Mustafa Suleyman: Exponential Growth, 12 years - Sees long-term potential in integrating AI in societal solutions.
              Jaron Lanier: S-curve Growth, 8 years - Critiques certain aspects of technology but acknowledges periods of significant innovation.
              Marc Andreessen: Exponential Growth, 15 years - Very bullish on technology including AI, expects revolutionary changes.
              Eliezer Yudkowsky: Exponential Growth, indefinite - Believes in the transformative potential of AI, possibly leading to superintelligence.
              Michèle Flournoy: Logarithmic Growth, 8 years - Expects significant advancements in AI for defense but sees regulatory and ethical challenges.
              Zeynep Tufekci: Logarithmic Growth, 7 years - Concerns about social implications and challenges may slow down the pace of acceptance and implementation.
              Kai-Fu Lee: Exponential Growth, 12 years - Enthusiastic about AI's impact on society, particularly in China.
              Daron Acemoglu: S-curve Growth, 10 years - Believes in significant growth followed by a plateau as economic factors weigh in.
              Andrew Imbrie: Logarithmic Growth, 8 years - Foresees growth moderated by policy and strategic considerations.
              Safiya Noble: Logarithmic Growth, 6 years - Focuses on the impact of AI on public information and ethics, seeing these as limiting factors.
              Michael Chui: Exponential Growth, 10 years - Optimistic about AI transforming businesses and the economy.
              Larry Page: Exponential Growth, indefinite - As a founder of Google, foresees limitless potential in AI advancements.
              Elon Musk: S-curve Growth, 7 years - Sees rapid growth followed by significant risks and challenges.
              Dario Amodei: Exponential Growth, 12 years - Focuses on advancing AI safely, sees continued rapid improvements.
              Bill Gates: Exponential Growth, 10 years - Generally optimistic about technology's ability to solve big problems.
              Reid Hoffman: Exponential Growth, 12 years - Sees AI as a crucial part of the future economy.
              Satya Nadella: Exponential Growth, 12 years - Emphasizes AI integration in cloud computing and business solutions.
              Peter Thiel: S-curve Growth, 10 years - Believes in strong initial growth, followed by potential stagnation as monopolistic practices set in.
              Mark Zuckerberg: Exponential Growth, indefinite - Strong proponent of integrating AI in social platforms.
              Swami Sivasubramanian: Exponential Growth, 10 years - Expects cloud and AI technologies to merge and grow rapidly.
              Susan Gonzales: Logarithmic Growth, 7 years - Advocates for inclusive AI but sees social barriers.
              Reggie Townsend: Logarithmic Growth, 8 years - Focuses on privacy and data protection, which may temper AI adoption rates.
              Miriam Vogel: Logarithmic Growth, 6 years - Concerned with ethical AI, predicts a moderated growth due to regulatory frameworks.
              Sundar Pichai: Exponential Growth, 12 years - Believes in the profound impact of AI on all Google's products and services.
              Sissie Hsiao: Exponential Growth, 10 years - Anticipates AI will continue to revolutionize communication apps.
              James Manyika: Logarithmic Growth, 10 years - Sees transformative potential but cautions about socio-economic impacts.
              Dr Milly Zimeta: Logarithmic Growth, 7 years - Focuses on AI ethics, sees growth influenced by ethical considerations.
              Peggy Hicks: Logarithmic Growth, 8 years - Highlights human rights concerns, which could influence the rate of AI development.
              Dame Wendy Hall: Logarithmic Growth, 10 years - Emphasizes the importance of governance in AI, which might slow growth.
              Carl Miller: S-curve Growth, 8 years - Studies the impact of digital technology on society, anticipates rapid growth followed by stability.
              
            • Synthesized Consensus

              Exponential Growth (27 individuals): Most expect rapid, continued growth over the next 8-15 years, often linked to advancements in technology and AI's integration into various sectors.
              Logarithmic Growth (17 individuals): Many foresee significant early advancements that will gradually plateau, influenced by ethical, societal, and practical challenges.
              S-curve Growth (6 individuals): A few predict periods of rapid innovation followed by a stabilization as AI reaches maturity or encounters insurmountable hurdles.
              

              This role-played synthesis suggests a general optimism for the near to mid-term future of AI, with a consensus leaning towards exponential growth, though moderated by practical, ethical, and societal considerations.
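              As a side note, a tally like this is easy to reproduce mechanically rather than counting by hand. A minimal sketch (the sample lines and the regex are my own, assuming entries shaped like "Name: Trajectory Growth, N years - rationale"):

```python
import re
from collections import Counter

# Three sample lines in the same shape as the panel entries above.
sample = """\
Andrew Ng: Exponential Growth, 10 years - Advocates for rapid advancements.
Ian Goodfellow: Logarithmic Growth, 10 years - Sees growth slowing over time.
Gary Marcus: S-curve Growth, 5 years - Sees a leveling off as limits are hit.
"""

# Capture the predicted trajectory and horizon from each entry.
pattern = re.compile(
    r"^.+?:\s*(Exponential|Logarithmic|S-curve) Growth,\s*(\d+ years|indefinite)",
    re.MULTILINE,
)
tally = Counter(trajectory for trajectory, _horizon in pattern.findall(sample))
print(tally)  # one entry per trajectory in this small sample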

              Given the various perspectives offered by the panel on the initial phase of AI growth, let's extend the reasoning to speculate about what might happen beyond the next 8-15 years:

              Those predicting Exponential Growth (indefinite), like Larry Page, Elon Musk, and Mark Zuckerberg, might suggest that AI growth could continue to escalate without a foreseeable plateau. They likely envision ongoing, transformative innovations that continuously push the boundaries of AI capabilities.
              
              Those foreseeing Exponential Growth for a finite period (e.g., Andrew Ng, Yann LeCun, Demis Hassabis) might anticipate a shift after the initial rapid growth phase. After the high-growth years, they might predict a transition to a slower, more sustainable growth pattern or a plateau as the AI industry matures and technological advancements face diminishing returns or run up against theoretical and practical limitations.
              
              Proponents of Logarithmic Growth, like Ian Goodfellow, Daphne Koller, and Safiya Noble, generally expect growth to slow and eventually plateau. After the initial period of significant advancements, they might predict that the AI field will stabilize, focusing more on refinement and integration rather than groundbreaking innovations. Ethical, regulatory, and societal constraints could increasingly play a role in moderating the speed of development.
              
              Advocates of S-curve Growth, such as Gary Marcus and Peter Thiel, typically envision that after a period of rapid innovation, growth will not only plateau but could potentially decline if new disruptive innovations do not emerge. They might see the field settling into a phase where AI technology becomes a standard part of the technological landscape, with incremental improvements rather than revolutionary changes.
              
              Special Considerations: Visionaries like Eliezer Yudkowsky, who speculate about AI reaching superintelligence levels, might argue that after 15 years the landscape could be radically different, potentially dominated by new AI paradigms or even AI surpassing human intelligence in many areas. That could either lead to a new phase of explosive growth or require significant new governance frameworks to manage the implications.
              

              Overall, the panel's consensus beyond the next 8-15 years would likely reflect a mixture of continued growth at a moderated pace, potential plateaus as practical limits are reached, and a landscape increasingly shaped by ethical, societal, and regulatory considerations. Some may also entertain the possibility of a decline if no new significant innovations emerge.

            • For Fun:

              The expansion of AI into space introduces a whole new paradigm with unique opportunities and challenges. Here are a few ways this panel might view AI’s role in space exploration and expansion:

              Enhanced Autonomy in Space Exploration: Leaders like Elon Musk and Larry Page, who are already invested in space technology through their companies, might foresee AI as crucial for managing autonomous spacecraft, probes, and robotic systems. AI could handle complex tasks like navigation, maintenance, and decision-making in environments where human oversight is limited by distance and communication delays.
              
              AI in Space Colony Management: Visionaries such as Sam Altman and Demis Hassabis might predict that AI will play a significant role in managing habitats and life-support systems on other planets or moons. These systems would require high levels of automation to ensure the safety and efficiency of off-world colonies.
              
              AI for Scientific Research in Space: Scientists like Geoffrey Hinton and Yoshua Bengio could see AI as a tool to process vast amounts of data from space missions, helping to make discoveries that are beyond human analytical capabilities. AI could autonomously manage experiments, analyze extraterrestrial materials, and monitor celestial phenomena.
              
              AI in Space Resource Utilization: Business leaders like Jeff Bezos, who has expressed interest in space through Blue Origin, might consider AI crucial for identifying and extracting resources. AI could control robotic miners and processing facilities, optimizing the extraction of water, minerals, and other materials essential for space colonization and possibly even for return to Earth.
              
              Ethical and Governance Challenges: Ethicists and regulatory-focused professionals like Joy Buolamwini and Miriam Vogel might raise concerns about deploying AI in space. They could focus on the need for stringent protocols to govern AI behavior, avoid potential conflicts over space resources, and ensure that space exploration remains beneficial and accessible to all humanity, not just a few privileged entities.
              
              Long-term AI Evolution: Futurists like Eliezer Yudkowsky might speculate on how AI could evolve uniquely in the space environment, potentially developing in ways that differ significantly from Earth-based AI due to different operational challenges and evolutionary pressures.
              

              In this new off-planet context, AI's growth could continue to accelerate in unique directions, facilitated by the absence of many constraints present on Earth, such as physical space and regulatory barriers. This could lead to new forms of AI and novel applications that could feed back into how AI evolves and is applied on Earth.

              Given the unique opportunities and challenges presented by space exploration, the panel of AI and business leaders might envision several likely patterns of growth for AI in this context:

              Accelerated Innovation and Specialization: As AI systems are tasked with operating autonomously in space environments, we can expect a surge in innovation aimed at developing highly specialized AI technologies. These AIs would be designed to withstand the harsh conditions of space, such as radiation, vacuum, and extreme temperatures, and to perform without direct human supervision. This could lead to rapid growth in specific AI domains like robotic autonomy, environmental monitoring, and resource extraction technologies.
              
              Integration with Space Technologies: The integration of AI with space technology would likely become more profound. AI could be instrumental in designing spacecraft and habitat modules, optimizing flight trajectories, and managing energy use. This integration might follow an exponential growth curve initially, as breakthroughs in AI-driven space technologies lead to further investments and interest in expanding these capabilities.
              
              Scalable Deployment Models: Given the cost and complexity of space missions, AI systems designed for space might initially focus on scalability and adaptability. This could lead to growth patterns where AI systems are incrementally upgraded and expanded upon with each successive space mission, rather than replacing them entirely. As such, growth could be steady and sustained over a long period, following a more logarithmic pattern as technologies mature and become standardized.
              
              Collaborative International Frameworks: As countries and private entities push further into space, international collaborations involving AI could become necessary. This could stimulate a steady growth of AI technologies as frameworks are developed to ensure that AI systems can interoperate seamlessly across different platforms and missions. These collaborative efforts might stabilize the growth rate, moving it towards a more predictable, linear path.
              
              Regulatory and Ethical Adaptation: Ethical and regulatory considerations will also shape AI's growth trajectory in space. As AI systems take on more responsibilities, from running life support systems to conducting scientific research, ensuring these systems operate safely and ethically will become paramount. Growth might initially be rapid as regulations struggle to keep up, but eventually, a plateau could occur as stringent standards and international agreements are put in place.
              
              Transformational Growth Phases: Over the long term, as AI starts enabling deeper space exploration and potentially the colonization of other planets, we could witness transformational growth phases where AI development leaps forward in response to new challenges and environments. These phases might appear as spikes in an otherwise steady growth curve, corresponding to major milestones such as the establishment of the first permanent off-world colonies.
              

              Overall, while the early stages of AI in space might be marked by exponential growth due to new opportunities and technological breakthroughs, the growth pattern could transition to a more steady, logarithmic, or piecewise linear trajectory as the technologies mature, regulatory frameworks are established, and the challenges of operating in space become better understood and managed.
