  • Passenger plane crash in Brazil kills all 61 on board - CNA
  • This is truly a terrible accident. Given the flight tracking data and the cold, winter weather at the time, structural icing is likely to have caused the crash.

    Ice will increase an aircraft’s stall speed, and especially when an aircraft is flown with autopilot on in icing conditions, the autopilot pitch trim can end up being set to the limits of the aircraft without the pilots ever knowing.

    Eventually the icing situation becomes so severe that the stall speed of the ice-laden wing and elevator exceeds the current cruising speed, resulting in an aerodynamic stall, which, if not immediately corrected with the right control inputs, will develop into a spin.

    The spin shown in several videos is a terrifying flat spin. Flat spins develop from normal spins after just a few rotations. It’s very sad and unfortunate that both engines can be heard producing power while the plane is in a flat spin toward the ground. The first thing to do when a spin is encountered is to eliminate all sources of power, because power will aggravate a spin into a flat spin.

    Once a flat spin is encountered, recovery from that condition is not guaranteed, especially in multi-engine aircraft where the outboard engines create a lot of rotational inertia.

  • What do you think of this prediction?
  • Valve is a unique company with no traditional hierarchy. In business school, I read a very interesting Harvard Business Review article on the subject. Unfortunately it’s locked behind a paywall, but this is Google AI’s summary of the article which I confirm to be true from what I remember:

    According to a Harvard Business Review article from 2013, Valve, the gaming company that created Half Life and Portal, has a unique organizational structure that includes a flat management system called "Flatland". This structure eliminates traditional hierarchies and bosses, allowing employees to choose their own projects and have autonomy. Other features of Valve's structure include:

    • Self-allocated time: Employees have complete control over how they allocate their time
    • No managers: There is no managerial oversight
    • Fluid structure: Desks have wheels so employees can easily move between teams, or "cabals"
    • Peer-based performance reviews: Employees evaluate each other's performance and stack rank them
    • Hiring: Valve has a unique hiring process that supports recruiting people with a variety of skills
  • Trump floats eliminating U.S. income tax and replacing it with tariffs on imports
  • Someone did the math and realized we would need a 130% tariff on all goods to replace current income tax revenue.

    People’s number one concern is inflation. If that tariff is created, we will see 100% inflation overnight!

  • Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue
  • You do realize that everything posted on the Fediverse is open and publicly available? It’s not locked behind some API or controlled by any one company or entity.

    The Fediverse is the Wikipedia of social media, and any researcher or engineer, including myself, can and will use Lemmy data to create AI datasets with absolutely no restrictions.
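
    For anyone curious how trivial that is, here is a minimal sketch of pulling public posts from a Lemmy instance. The /api/v3/post/list endpoint and the response field names reflect my understanding of Lemmy's public API and may differ between versions, so treat them as assumptions:

    ```python
    # Sketch: fetch public posts from a Lemmy instance over its open HTTP API.
    # Endpoint and field names are assumptions about Lemmy's v3 API.
    import requests

    INSTANCE = "https://sh.itjust.works"  # any public instance

    def fetch_posts(limit=20, sort="New"):
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"limit": limit, "sort": sort},
            timeout=10,
        )
        resp.raise_for_status()
        # Each entry wraps a "post" object with a title ("name") and optional "body".
        return [
            {"title": p["post"]["name"], "body": p["post"].get("body", "")}
            for p in resp.json()["posts"]
        ]

    if __name__ == "__main__":
        for post in fetch_posts(limit=5):
            print(post["title"])
    ```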

  • Humor
  • It took Hawking minutes to create some responses. Without the use of his hand due to his disease, he relied on the twitch of a few facial muscles to select from a list of available words.

    As funny as it is, that interview, or any interview with Hawking, contains pre-drafted responses from Hawking and follows a script.

    But the small facial movements showing his emotion still showed Hawking had fun doing it.

  • What is a good eli5 analogy for GenAI not "knowing" what they say?
  • To add to this insight, there are many recent publications showing the dramatic improvements of adding another modality like vision to language models.

    While this is conjecture only loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.

  • What is a good eli5 analogy for GenAI not "knowing" what they say?
  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+, called a token, with some randomness injected so the output isn’t exactly the same for any given prompt.
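
    If it helps to see that in code, here is a toy sketch of "predict the next word with some randomness": the model scores every token in its vocabulary, and a temperature-scaled sample is drawn. The vocabulary and scores below are invented for illustration.

    ```python
    # Toy illustration of next-token prediction with temperature sampling.
    # The vocabulary and logit values are made up for the example.
    import numpy as np

    vocab = ["the", "cat", "sat", "mat", "."]
    logits = np.array([2.0, 0.5, 1.2, 0.1, -1.0])  # raw scores a model might output

    def sample_next_token(logits, temperature=0.8):
        # Softmax turns scores into probabilities; temperature controls randomness.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    print(vocab[sample_next_token(logits)])  # a different draw is possible on each run
    ```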

    The probability of the next word comes from what was in the model’s training data, in combination with a very complex mathematical method, called self-attention, that computes the impact of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.
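
    A stripped-down sketch of that relatedness computation (scaled dot-product self-attention) is below. The sizes and weights are random placeholders; real models add learned projections per layer, multiple heads, and positional information.

    ```python
    # Minimal scaled dot-product self-attention over n token embeddings.
    import numpy as np

    n, d = 6, 16                      # 6 tokens, 16-dim embeddings (toy sizes)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))       # token embeddings

    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    scores = Q @ K.T / np.sqrt(d)     # n x n "relatedness" of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row

    output = weights @ V              # each token becomes a weighted mix of all tokens
    print(weights.shape)              # (6, 6): the n x n matrix is why cost grows quadratically
    ```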

    This relatedness factor is very computationally expensive and grows quadratically with the number of words considered, so models are limited by how many previous words can be used to compute relatedness. This limitation is called the context window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So literally, the model builds entire responses one word at a time, from left to right.
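
    That loop looks roughly like the sketch below. Here `model` is a stand-in for any function that maps the tokens so far to next-token probabilities, and the stop-token id is model-specific; greedy argmax is used just to keep the sketch self-contained.

    ```python
    # Sketch of the iterative decoding loop: one new token at a time until a stop token.
    import numpy as np

    STOP_TOKEN = 0            # id of the special end-of-sequence token (model-specific)
    MAX_NEW_TOKENS = 256

    def generate(model, prompt_tokens):
        tokens = list(prompt_tokens)
        for _ in range(MAX_NEW_TOKENS):
            probs = model(tokens)                  # depends on everything generated so far
            next_token = int(np.argmax(probs))     # or sample, as in the sketch above
            if next_token == STOP_TOKEN:
                break
            tokens.append(next_token)              # the new token becomes part of the context
        return tokens
    ```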

    Because all future words are predicated on the previously stated words in either the prompt or subsequent generated words, it becomes impossible to apply even the most basic logical concepts, unless all the components required are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about the LLM's limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely words to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer to be correct.

    —-

    +More specifically, these words are tokens, which usually represent some smaller part of a word. For instance, understand and able would be represented as two tokens that, when put together, become the word understandable.
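
    A toy version of that splitting, with a made-up vocabulary, is sketched below. Real LLM tokenizers (byte-pair encoding and similar) learn their own sub-word pieces from data, so the exact splits vary by model.

    ```python
    # Toy sub-word tokenizer: greedy longest-match against a tiny made-up vocabulary.
    VOCAB = {"understand", "able", "un", "stand", "a", "b", "l", "e"}

    def tokenize(word):
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):      # try the longest piece first
                if word[i:j] in VOCAB:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:
                raise ValueError(f"cannot tokenize {word[i:]!r}")
        return tokens

    print(tokenize("understandable"))  # ['understand', 'able'] -> two tokens, one word
    ```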

  • Tesla is being investigated by the DOJ for securities and wire fraud by making misleading self-driving claims
  • Agreed.

    Nevertheless, the Federal regulators will have an uphill battle as mentioned in the article.

    Neither "puffery" nor "corporate optimism" counts as fraud, according to US courts, and the DOJ would need to prove that Tesla knew its claims were untrue.

    The big thing they could get Tesla on is the safety record for autosteer. But again there would need to be proof it was known.

  • Tesla is being investigated by the DOJ for securities and wire fraud by making misleading self-driving claims
  • I am a pilot and this is NOT how autopilot works.

    There are some autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing leveler.

    The waypoints must be programmed by the pilot in the GPS. Altitude is entirely controlled by the pilot, not the plane, except when on a programmed instrument approach, and only when it captures the glideslope (so you need to be in the correct general area in 3D space for it to work).

    An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.

    Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released 6 years ago, it still did more than most aircraft autopilots.

  • For those thinking of going back to reddit. Gaze upon this comment section and reconsider.
  • AFAIK, there’s nothing stopping any company from scraping Lemmy either. The whole point of Reddit limiting API usage was so they could make money like this.

    Morals aside, there is nothing to stop anybody from training on data from Lemmy, just like there’s nothing stopping me from using Wikipedia. Most conferences nowadays require a paragraph on ethics in the submission, but I and many of my colleagues would have no qualms saying we scraped our data from open-source internet forums and blogs.

  • What the process for people getting their driving licenses where you live?
  • I'm convinced that we should hold drivers to the same kinds of requirements we use for flying an airplane.

    As a pilot, there are several items I need to log at regular intervals to remain proficient so that I can continue to fly with passengers or fly under certain conditions. The biggest one is the need for a Flight Review every two years.

    If we did the bare minimum and implemented a Driving Review every two years, our roads would be a lot safer, and a lot fewer people would die. If people cared as much about driving deaths as they do about flying deaths, the world would be a much better place.

  • 3 Dead in Massachusetts Plane Crash
    www.cbsnews.com: 3 people killed in western Massachusetts small plane crash

    Three people were killed Sunday during a small plane crash in western Massachusetts on the border of Greenfield and Leyden.

    The aircraft’s last known position and speed show it climbing with decreasing speed. Based on the small loops shown, this was likely a training flight or proficiency check. It can be assumed the aircraft was placed into an intentional stall for training or a VMC demo, but quickly departed controlled flight for an unknown reason. It was very windy in Massachusetts (up to 50 mph at altitude), and wind shear may have also been a factor.

    According to online aviation blogs, those who knew the pilots say that two of the fatally injured occupants were experienced senior instructors.

    https://www.flightaware.com/live/flight/N7345R

  • 77 Groups Worldwide Back Genocide Lawsuit Against Biden in U.S. Court. The Biden administration is due in federal court later this month, while Israel faces charges of genocide at The Hague this week.
  • I hate that I am defending Israel when I say this because what is occurring in Gaza is tragic, but a lot of people are confusing "Genocide" for perceived "War Crimes" as defined by international law and also confusing "Hamas" for "Palestine" or the "Palestinian Authority".

    Hamas is a terrorist government (similar in nature to the Taliban) that receives a lot of external funding from countries that actively wish to see the death of Israel and all Jews, making Hamas the chief perpetrator of genocide in this conflict despite how ineffective they have been in their goals.

    Israel was attacked by this terrorist government and is now defending itself with the expressed war goal of destroying Hamas. While Israel has had a tenuous relationship with the Palestinian people (namely the government's active efforts to limit the Palestinian Authority and its foot-dragging on granting the PA more autonomy and their own state, which is deplorable and inexcusable), they do not and have not wished to kill an entire culture of people.

    Complicating matters, Hamas commonly employs warfare techniques that go against the Geneva Conventions, like placing government and military headquarters in the basements of protected buildings such as hospitals and places of worship. The moment they do that and abuse those internationally recognized sanctuaries, those sites become legitimate military targets, leading to the tragic deaths of unwitting civilians.

    People can object to the war on the grounds that war is tragic and results in many civilian casualties, but making meritless claims is detrimental both to international institutions and to the definition of a genocide. South Africa calls what Israel is doing a genocide, but also explicitly looks the other way on Ukraine and continues to forge close ties with Putin. (For the record, Russia's actions in Ukraine are also not considered genocide under the strict international definition, but Russia has been found guilty of war crimes.)

    Israel has an internationally recognized right to defend itself, and it is doing that by dismantling Hamas through force. The Palestinian people are unfortunately caught in the crossfire. With that said, Israel's methods to this end are not above criticism, and they have faced pressure from the US and Biden to limit civilian casualties wherever possible, and use ground forces to directly attack Hamas rather than relying on airstrikes that have resulted in many innocent deaths.

    For those reading who think all war is bad, I'll leave you with this quote from John Stuart Mill:

    War is an ugly thing, but not the ugliest of things: the decayed and degraded state of moral and patriotic feeling which thinks that nothing is worth a war, is much worse. When a people are used as mere human instruments for firing cannon or thrusting bayonets, in the service and for the selfish purposes of a master, such war degrades a people. A war to protect other human beings against tyrannical injustice; a war to give victory to their own ideas of right and good, and which is their own war, carried on for an honest purpose by their free choice, — is often the means of their regeneration. A man who has nothing which he is willing to fight for, nothing which he cares more about than he does about his personal safety, is a miserable creature who has no chance of being free, unless made and kept so by the exertions of better men than himself. As long as justice and injustice have not terminated their ever-renewing fight for ascendancy in the affairs of mankind, human beings must be willing, when need is, to do battle for the one against the other.

  • $1.5 billion in estimated revenue: A look at the Massachusetts 'millionaire's tax' first year taxing the rich
  • You are looking at two different tax systems. The effective US tax rate (the rate you actually pay) is much, much less. Our household makes $300k per year, and we have a $650k net worth. Our income taxes every year? Less than 7% of that, which is absurdly low. The ultra wealthy are taxed even less than that. The US is propped up by taxes from the middle class, because the more you make, the easier it becomes to optimize and lower your effective tax rate. We need to tax the rich more.

  • World's richest 1% emit as much carbon as bottom two-thirds: report
  • The actual study claims that the top 10% threshold is $41k and that this group accounts for 50% of carbon emissions. Nowhere does it normalize incomes for those from Kenya, as the article claims; these incomes are viewed globally. If you are in the US and make more than $20/hr working 40 hours a week, you are in the top 10%.

    $67/hr makes you top 1%.
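
    The back-of-the-envelope conversion behind those hourly figures, assuming full-time work (40 hours/week, 52 weeks/year); the $41k threshold is from the study, and the top-1% annual number is just the implied product:

    ```python
    # Rough conversion between the study's annual thresholds and hourly wages,
    # assuming full-time work (40 hours/week, 52 weeks/year).
    HOURS_PER_YEAR = 40 * 52                 # 2,080 hours

    top10_annual = 41_000                    # threshold cited from the study
    print(top10_annual / HOURS_PER_YEAR)     # ~19.7 -> roughly $20/hr

    top1_hourly = 67                         # hourly figure quoted above
    print(top1_hourly * HOURS_PER_YEAR)      # 139,360 -> implied annual top-1% threshold
    ```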

    Others are calling to eat the rich without realizing that the global rich include low-wage earners flipping burgers at McDonald's (I'm in Boston, where minimum wage is $15/hr and an assistant manager can be hired for $22/hr).

    https://oxfamilibrary.openrepository.com/bitstream/10546/621551/2/cr-climate-equality-201123-en.pdf

  • Google Brain cofounder says Big Tech companies are lying about the risks of AI wiping out humanity because they want to dominate the market
  • I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, it doesn’t mean that it won’t imitate such actions.

    These days, when people think AI, they are mostly referring to language models, as these are what most people will interact with. A language model is trained on a corpus of documents. In the case of Large Language Models like ChatGPT, they are trained on just about every written document in existence. This includes Hollywood scripts and short stories concerning sentient AI.

    If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.

    CodeInvasion @sh.itjust.works
    Posts 1
    Comments 61