
Privacy

  • Hack Liberty Privacy and Security Resources
    forum.hackliberty.org Hack Liberty Privacy and Security Resources

    Guides Anonymous Planet [Tor] - a community contributed online guide to anonymity written for activists, journalists, scientists, lawyers, whistle-blowers, and good people being oppressed, censored, harassed anywhere Privacy Guides [Tor] - a non-profit, socially motivated website that provides info...

    0
  • data was exfiltrated from a corp I did not even know had my data; then they offer to have a privacy abuser (Cloudflare) MitM credit monitoring txns. WTF?

    cross-posted from: https://links.hackliberty.org/post/2667522

    > Apparently some company I do business with shared my data with another corp without me knowing, then that corp who I did not know had my data was breached.
    >
    > WTF?
    >
    > Then the breached corp who could not competently secure the data in the first place offers victims a gratis credit monitoring service (read: offers to let yet another dodgy corp also have people’s sensitive info, thus creating yet another breach point). Then the service they hired as a “benefit” to victims outsources to another corp and breach point: Cloudflare.
    >
    > WTF?
    >
    > So to be clear, the biggest privacy abuser on the web is being used to MitM a sensitive channel between a breach victim and a credit monitoring service whose configuration blocks Tor (thus neglecting data minimization and forcing data breach victims to reveal even more sensitive info to two more corporate actors, one of whom has proven to be untrustworthy with private info).
    >
    > I am now waiting for someone to say “smile for the camera, you’ve been punk’d!”.
    >
    > (update)
    > Then the lawyers representing data breach victims want you to give them your e-mail address so they can put Microsoft Outlook in the loop. WTF? The shit show of incompetence has no limit.

    2
  • Android's Find My Device

    Are there any privacy implications of enabling it?

    6
  • Dutch DPA imposes a fine of €290 million on Uber due to transfers of drivers’ data to the US
    www.autoriteitpersoonsgegevens.nl Dutch DPA imposes a fine of 290 million euro on Uber because of transfers of drivers' data to the US

    Dutch DPA imposes a fine of 290 million euro on Uber because of transfers of drivers' data to the US.


    The link is Cloudflare-free, popup-free and reachable by Tor users.

    (edit) Some interesting factors--

    from the article:

    > For a period of over 2 years, Uber transferred those data to Uber's headquarters in the US, without using transfer tools. Because of this, the protection of personal data was not sufficient. The Court of Justice of the EU invalidated the EU-US Privacy Shield in 2020.

    Yes, but strangely & sadly, the US benefits from an adequacy decision, which IIRC happened after 2020. This means the US is officially construed as having privacy protections on par with Europe. As perverse as that sounds, no doubt Uber’s lawyers will argue that point.

    > The Dutch DPA started the investigation on Uber after more than 170 French drivers complained to the French human rights interest group the Ligue des droits de l’Homme (LDH), which subsequently submitted a complaint to the French DPA.

    Wow! I wonder what triggered so many drivers to consult a human rights group. I mean, consider that Uber users and drivers are all happy to run a closed-source Google-gated app.. this is not a demographic who cares about privacy. So what triggered 170 complaints? I wonder if the Dutch DPA would have taken any action had there not been 170 cross-border complainants.

    The French DPA gives some interesting insight. The info provided to satisfy access requests was in English, not French, which breaks the accessibility rule. The French article gives more the impression that it was not 170 proactive complaints, but that the human rights org complained on behalf of 170 drivers. I am quite curious, from an activist point of view, whether 170 drivers proactively initiated a complaint.

    The fourth breach is interesting:

    > by not explicitly mentioning the right to data portability in their privacy statement.

    Is data portability even useful for Uber drivers in France? I’ve never used Uber (fuck Google), but I imagine drivers have feedback about how well they perform and maybe they want to port that data to an Uber competitor.. but there is no Uber competitor in France, is there? Is Lyft in France?

    0
  • Court Slams Geofence Warrants as Unconstitutional Overreach
    web.archive.org Court Slams Geofence Warrants as Unconstitutional Overreach

    Court declares geofence warrants unconstitutional, likening them to invasive mass surveillance that undermines Fourth Amendment protections.


    “Categorically unconstitutional” – that is how the US Fifth Circuit Court of Appeals has ruled about the use of geofence warrants.

    The court found that this type of warrant, which enables dragnet-style mass surveillance, violates the Fourth Amendment.

    This amendment is meant to protect citizens from unreasonable searches or seizures – but, said the court of appeals, what geofence warrants do is allow for the opposite: “General, exploratory rummaging.”

    We obtained a copy of the ruling for you here.

    Geofencing works by essentially treating everyone who happens to be in a geographic area during a given time as a suspect, until established otherwise.

    The Electronic Frontier Foundation (EFF), a digital rights group and outspoken critic that often gets involved in legal cases to argue against this method of investigation, welcomed the court’s decision, noting that people should not have to fear that carrying their phone in public could turn them into a criminal suspect.

    The Circuit Court’s stance on geofence warrants came as it deliberated United States v. Smith, a case arising from Mississippi police obtaining this type of warrant in 2018 to investigate an armed robbery and assault that took place in a post office.

    Google, the company law enforcement turns to with these warrants most of the time, obliged, turning over data from the phones to the police, who then managed to produce two suspects, later defendants.

    But – even though it decided not to suppress the evidence, because it found the police were acting “in good faith” while geofencing was still a new phenomenon – the Fifth Circuit Court doesn’t think the warrants are inherently lawful, i.e., in compliance with the Constitution.

    One problem cited by the judges is that police access to the sensitive location data collected during geofencing is “highly invasive,” since it can reveal a lot about a person, including their associations, and also lets the police “‘follow’ them into private spaces,” as EFF explained the court’s decision.

    Another is that the warrants never specify that they apply to a particular person, as law enforcement “have no idea who they are looking for, or whether the search will even turn up a result.”

    0
  • Biden Administration Rushes Digital ID Plans
    web.archive.org Biden Administration Rushes Digital ID Plans

    Push for digital IDs ignites debates on privacy, security, and the future of online anonymity in America.


    The Biden administration is working to expedite widespread adoption of digital IDs, including driver’s licenses, a draft executive order indicates.

    Digital IDs are a contentious concept primarily because of the concentration of – eventually – the entirety of people’s sensitive private information in centralized databases controlled by the government, and on people’s phones, “client-side.”

    That in turn brings up the issues of technical security, but also privacy, and the potential for dystopian-style mass surveillance.

    Proponents, on the other hand, like to focus on the “convenience” that such a shift from physical to digital personal documents is promised to bring.

    In the US, some states have started this process via digital driver’s licenses, and the executive order is urging (“strongly encouraging”) both federal and state authorities to accelerate this, as well as other types of digital ID.

    This policy seems to be converging on finding, at long last, a functional way to carry out online identity verification. Namely, digital ID would be combined with biometric data obtained through facial recognition and other forms of biometrics harvesting.

    Centralization of data – opponents say to better control it, even if that makes it less secure – is a key component of these schemes, and so the Biden executive order speaks about making it obligatory for federal agencies to join “a single government-run identity system, Login.gov,” reports say.

    It is also noted that Biden initially mentioned such an executive order was coming during his 2022 State of the Union speech, but the wording reportedly became a cause of contention.

    Now, that seems to have been resolved, and the only question for the administration is when Biden should sign the order, according to the same sources who saw the text.

    At the same time, as states are launching their own (partial) digital ID programs, an increasing number are looking for ways to introduce online age verification and are enacting laws to this effect.

    A federal-level digital ID scheme would help in these efforts to solve the “problem” of online anonymity – and in the process forever change the internet as we know it.

    10
  • VPNs Vanish from Brazil’s App Store as Internet Freedom Faces Unprecedented Clampdown
    web.archive.org VPNs Vanish from Brazil’s App Store as Internet Freedom Faces Unprecedented Clampdown

    The removal of VPN apps and the shutdown of X in Brazil signal a troubling escalation in the country's efforts to control online access and information.


    In Brazil, a significant upheaval in digital privacy and access to information is unfolding, as a notable number of reputable VPN services—including NordVPN, ExpressVPN, Surfshark, and VyprVPN—have vanished from the local iOS App Store. This move is widely believed to comply with Brazilian authorities’ directives, reflecting a concerning trend towards online censorship.

    This development is particularly alarming in light of X’s recent decision to shut down its operations in the country. X terminated its operations after a protracted legal confrontation with Brazilian officials, who had accused the platform of insufficient efforts to combat disinformation, specifically its failure to block accounts spreading false information and hate speech. Despite the shutdown, X’s app is still accessible in Brazil.


    The closure of X’s offices and the removal of VPNs from the App Store have spurred a significant shift toward VPN usage among Brazilians seeking to bypass increasing online restrictions. Proton VPN reported a staggering 580% surge in new registrations recently, highlighting the growing reliance on VPNs to maintain internet freedom.

    Nevertheless, acquiring these tools has become challenging. Attempts to install these apps from the iOS App Store are met with no option to download, indicating a blockade rather than a mere removal.

    The current scenario underscores the critical importance of VPN services in safeguarding internet freedom in Brazil. As digital platforms face governmental pressures and the landscape of internet accessibility continues to evolve, the role of VPNs as tools for ensuring unrestricted access to information becomes ever more vital.

    0
  • California's Digital Plates Plan Raises Privacy Fears
    web.archive.org California's Digital Plates Plan Raises Privacy Fears

    The push to reintroduce GPS tracking in California's digital license plates raises alarms about privacy and unchecked surveillance.


    California is one of the US states that have introduced digital license plates, amid opposition from a number of rights advocates.

    Now, there is a legislative effort to have GPS location tracking embedded in what are, to all intents and purposes, devices attached to the car.

    Sponsored by Democrat Assemblywoman Lori Wilson, AB 3138 is currently making its way through the state’s legislature. It refers to “License plates and registration cards: alternative devices,” and the bill has another sponsor – Reviver.

    The company was founded by Neville Boston, formerly of the Department of Motor Vehicles (DMV), and promotes itself as the first digital license plates platform. It has made its way to both this proposal, and the law the current draft builds on – AB 984 (also sponsored by Wilson) – which was signed into law two years ago.

    The problem with Reviver is that it has already had a security breach that allowed hackers to track those using the company’s digital plates in real-time. It doesn’t help, either, that the company is effectively a monopoly – the only one, the Electronic Frontier Foundation (EFF) notes, “that currently has state authorization to sell digital plates in California.”

    Meanwhile, the key problem with AB 3138, warns EFF, is that it “directly undoes the deal from 2022 and explicitly calls for location tracking in digital license plates for passenger cars.”

    The deal in question refers to the way AB 984 eventually managed to become law, signed by Governor Gavin Newsom: a provision that would have allowed for location tracking of private vehicles was removed at the time.

    But clearly, that was just a temporary move to pacify opponents, and now Wilson – and Reviver – are back to “complete” the original effort.

    EFF is urging the legislature not to approve AB 3138 and is choosing to highlight those scenarios where such GPS tracking would be detrimental to those who are ostensibly among the voters or sympathizers of Wilson and her party.

    Thus, the digital rights group speaks about those seeking abortion traveling (and being tracked, unawares) from state to state, the Immigration and Customs Enforcement (ICE) using the tech, etc.

    However, it’s difficult to see how adding another way for the authorities to track vehicles in real-time is not potentially detrimental to any person, as a form of invasive mass surveillance.

    1
  • Texas is Sued Over Attempts To End Online Anonymity

    Free speech group the Foundation for Individual Rights and Expression (FIRE) has gone to court in a bid to block Texas’ state age verification law, the Securing Children Online through Parental Empowerment Act (SCOPE Act, HB 18).

    We obtained a copy of the complaint for you here.

    This largely Republican-backed law will take effect on September 1, at which point online platforms will be obligated to register and verify the age of all users.

    This will apply if “more than a third” of content on the platforms is considered “harmful” or “obscene.”

    But FIRE believes this is a form of pressure to make sure sites collect biometric and ID data from adults in Texas as they access what is lawful (to them) content.

    Hence the case, Students Engaged in Advancing Texas v. Paxton, where FIRE is suing state Attorney General Ken Paxton on behalf of four plaintiffs that the group says would have their rights threatened by the SCOPE Act – unless the US District Court for the Western District of Texas issues declaratory and injunctive relief.

    In other words, FIRE wants the judges to stop the enforcement of the law, which the filing brands as unconstitutional.

    Said FIRE Chief Counsel Bob Corn-Revere: “In a misguided attempt to make the internet ‘safe’, Texas’ law treats adults like children. But even minors have First Amendment rights. Whether they’re 16 or 65, this law infringes on the rights of all Texans.”

    This is by no means a sole voice expressing disagreement with the idea that more, and more invasive online censorship and surveillance will result in better protection of children.

    Senator Rand Paul has penned an opinion piece where he goes after the Kids Online Safety Act (KOSA), which has raised privacy, censorship, and digital ID concerns among civil rights activists.

    According to Paul, what motivated those behind the legislation to come up with it is not questionable, but the actual bill falls short to the point where it “promises to be a Pandora’s box of unintended consequences.”

    The senator notes that those pushing the bill insist the goal is not to regulate content, but he believes online platforms would face unprecedented demands regarding mental health harms, like anxiety, depression, and eating disorders.

    However, Paul believes that “imposing a duty of care on internet platforms associated with mental health can only lead to one outcome: the stifling of First Amendment–protected speech,” while at the same time empowering “speech police” to “silence important and diverse discussions that are essential to a free society.”

    Paul speaks in favor of making sure those protections continue to apply and suggests coming up with “clear” rules for platforms, allowing them to comply with the law.

    But KOSA, according to him, “fails to do that in almost every respect.”

    The senator sees it as (yet another) bill that is too vague for (legal) comfort, so much so that “many of its key provisions are completely undefined.”

    1
  • Google Faces Court Showdown Over Alleged Secret Chrome Data Harvesting
    web.archive.org Google Faces Court Showdown Over Alleged Secret Chrome Data Harvesting

    Google faces renewed legal scrutiny as the Ninth Circuit Court revives a class-action lawsuit alleging secret data collection from Chrome users.


    Although a lower court had dismissed the case, the Court of Appeals for the Ninth Circuit has decided that Google will have to go to trial after all, for allegedly secretly collecting data from Chrome users, regardless of whether they chose to sync information from the browser with their Google account.

    The class action lawsuit, Calhoun v. Google LLC, accuses the tech giant of using the browser, by far the most dominant in its market, to collect browsing history, IP addresses, unique browser identifiers, and persistent cookie identifiers – all without consent.

    The case was initially filed in 2020 and then dismissed in December 2022, but now the appellate court – in a ruling signed by Judge Milan D. Smith Jr. – said that the lower court’s decision failed to consider, looking into Google’s disclosures, i.e., the privacy policy agreement, “whether a reasonable user reading them would think that he or she was consenting to the data collection.”

    The plaintiffs are certain this was in fact happening without explicit permission, and consider the way Chrome was set up to work in this context “intentional and unlawful.”

    Google on the other hand defended its actions when the case was originally filed by saying that explicit permission happened when users accepted its privacy policy. The lower court judge, Yvonne Gonzalez Rogers, accepted this argument to dismiss the case, saying Google’s disclosure about the data collection was “adequate,” and therefore had the users’ consent.

    According to Judge Smith, despite its general policy, Google was pushing Chrome “by suggesting that certain information would not be sent to Google unless a user turned on sync.”

    Interestingly enough, Google is removing the sync option from all versions of Chrome – after iOS, this will now be the case on desktops and Android as well. All it will take is to sign into the Google account on Chrome to link the data from the browser to the account – although signing in is not mandatory, at least for now.

    A Google spokesman who commented on the decision of the court of appeals – which sent the case back to a lower court – confirmed that the change “is not related to the litigation.”

    As for the litigation – “We disagree with this ruling and are confident the facts of the case are on our side. Chrome Sync helps people use Chrome seamlessly across their different devices and has clear privacy controls,” claims Jose Castaneda.

    1
  • Big Brother Goes Digital: The Feds' Race to Integrate Mobile IDs in America
    web.archive.org Big Brother Goes Digital: The Feds' Race to Integrate Mobile IDs in America

    Critics warn the initiative could deepen surveillance concerns, as digital IDs increasingly intertwine with everyday life.


    The push to develop digital ID and expand its use in the US is receiving a boost as the country’s National Institute of Standards and Technology (NIST) is launching a new project.

    NIST’s National Cybersecurity Center of Excellence (NCCoE) has teamed up with 15 large financial and state institutions, as well as tech companies, to research and develop a way of integrating Mobile Driver’s License (mDL) into financial services. But according to NIST, this is just the start and the initial focus of the program.

    The agreement represents an effort to tie in yet more areas of people’s lives in their digital ID (“customer identification program requirements” is how NIST’s announcement describes the focus of this particular initiative). These schemes are often criticized by rights advocates for their potential to be used as mass surveillance tools.

    Now NIST’s initiative brings together this institution and the American Association of Motor Vehicle Administrators (AAMVA), the California Department of Motor Vehicles, the Department of Homeland Security (DHS) – Science and Technology Directorate, the New York State Department of Motor Vehicles, JP Morgan Chase, Wells Fargo, and Microsoft.

    Among the other participants are companies specializing in digital ID: IDEMIA, MATTR Limited, iLabs, SpruceID, and the OpenID Foundation (plus US Bank and Block Inc.).

    They were chosen after submitting a response regarding their capabilities via the Federal Register, and have now received collaborative research and development agreements, known as CRADA.

    Those who are now in will work within the project’s three phases, dubbed Define, Assemble, and Build. The first will set the scope of work along with industry participants, the second should produce teams with members from the industry, government, and academia, while the “Build” phase is to focus on “creating practical modules and prototypes to address cybersecurity challenges.”

    They will now collaborate with NCCoE to speed up the adoption of digital ID standards, a press release said, as well as best practices by developing “reference architectures, representative workflows, and implementation guides to address real-world cybersecurity, privacy, and usability challenges faced by the adoption of mDL in the financial sector.”

    NIST’s NCCoE itself is set up as a hub dealing with cybersecurity and often works with government, industry, and academia on developing precisely this type of standard.

    The call to respond to the mobile driver’s license project collaboration was first issued a year ago, in late August 2023.

    2
  • An email address you can distribute that is MS & Google dysfunctional

    This email provider gives onion email addresses:

    pflujznptk5lmuf6xwadfqy6nffykdvahfbljh7liljailjbxrgvhfid.onion

    Take care when creating the username to pull down the domain list and choose the onion domain. The address you get can then be used to receive messages. Unlike other onion email providers, this is possibly the only provider that offers addresses with no clearnet variant. So even if a recipient figures out the clearnet domain, it apparently cannot be used to reach you. This forces Google and MS out of the loop.

    It’s narrowly useful for some situations where you are forced to provide an email address against your will (which is increasingly a problem with European governments). Though of course there are situations where it will not work, such as if it’s a part of a procedure that requires confirmation codes.

    Warning: be wary of the fact that this ESP’s clearnet site is on Cloudflare. Just don’t use the clearnet site and keep CF out of the loop.

    0
  • Fedi design needs to evolve for privacy -- for anonymous posting

    I have lots of whistles to blow. Things where if I expose them then the report itself will be instantly attributable to me by insiders who can correlate details. That’s often worth the risks if the corporate baddy who can ID the whistle blower is in a GDPR region (they have to keep it to themselves.. cannot doxx in the EU, Brazil, or California, IIUC).

    But risk heightens when many such reports are attributable under the same handle. Defensive corps can learn more about their adversary (me) through reports against other shitty corps due to the aggregation under one handle.

    So each report should really be under a unique one-time-use handle (or no handle at all). Lemmy nodes have made it increasingly painful to create burner accounts (CAPTCHA, interviews, fussy email domain criteria, waiting for approval followed by denial). It’s understandable that unpaid charitable admins need to resist abusers.

    Couldn’t this be solved by allowing anonymous posts? The anonymous post would be untrusted and hidden from normal view. Something like SpamAssassin could score it. If the score is favorable enough, it could go to a moderation queue where a registered account (not just mods) could vote it up or down, provided the voting account has a certain reputation level, so that an anonymous msg could then possibly reach a stage of general publication.

    It could even be someone upvoting their own msg. E.g. if soloActivist has established a history of civil conduct and thus has a reputation fit for voting, soloActivist could rightfully vote on their own anonymous posts that were submitted while logged out. The (pseudo)anonymous posts would only be attributable to soloActivist by the admin (I think).

    A spammer blasting their firehose of sewage could be mitigated by a tar pit -- a one-msg-at-a-time policy, so you cannot submit an anonymous msg until SpamAssassin finishes scoring the previous msg. SpamAssassin could be artificially slowed down as volume increases.

    As it stands, I just don’t report a lot of things because it’s not worth the effort that the current design imposes.
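    The flow described above can be sketched in a few lines of Python. Everything here is hypothetical: the threshold values, the reputation cutoff, and the stand-in `score_fn` are made-up illustrations; a real deployment would call SpamAssassin and persist posts in the node’s database.

```python
from dataclasses import dataclass

# Hypothetical policy knobs -- real values would be tuned by the admin.
SPAM_THRESHOLD = 5.0       # SpamAssassin-style score: lower means less spammy
MIN_VOTER_REPUTATION = 50  # minimum reputation to vote on queued posts
PUBLISH_VOTES = 2          # net up-votes needed for general publication

@dataclass
class AnonPost:
    body: str
    spam_score: float
    votes: int = 0
    published: bool = False

class ModerationQueue:
    """Tar-pit intake: only one anonymous message may be awaiting a spam
    score at any time; favorably scored posts land in a vote queue."""

    def __init__(self, score_fn):
        self.score_fn = score_fn   # stand-in for a SpamAssassin call
        self.pending = None        # the single message currently being scored
        self.queue = []            # scored posts awaiting reputation votes

    def submit(self, body):
        """Accept a new anonymous message unless the tar pit is occupied."""
        if self.pending is not None:
            return False           # previous msg not yet scored: try later
        self.pending = body
        return True

    def finish_scoring(self):
        """Run the (possibly artificially slowed) scorer and triage."""
        body, self.pending = self.pending, None
        score = self.score_fn(body)
        if score >= SPAM_THRESHOLD:
            return None            # discarded as likely spam
        post = AnonPost(body, score)
        self.queue.append(post)    # hidden from normal view until voted up
        return post

    def vote(self, post, voter_reputation, up=True):
        """Registered accounts above the reputation bar vote posts in or out."""
        if voter_reputation < MIN_VOTER_REPUTATION:
            return                 # account not reputable enough to vote
        post.votes += 1 if up else -1
        if post.votes >= PUBLISH_VOTES:
            post.published = True
```

    Under this sketch, an author could vote on their own anonymous submission from their registered account, exactly as the soloActivist example suggests, provided that account clears the reputation bar.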

    8
  • Landmark Ruling Strikes Down Warrantless Device Searches of US Citizens at Borders

    The District Court for the Eastern District of New York has ruled that the US government must reverse course on its policy of warrantless searches of US (and foreign) nationals’ electronic devices as they enter the country.

    We obtained a copy of the ruling for you here.

    This is not the only court decision on this issue, but this particular outcome – requiring that border agents obtain court-issued orders before performing such searches – applies to the district where the court sits, which includes a major port of entry, JFK International Airport.

    It was precisely at this airport that an event unfolded which set in motion a legal case. In 2022, US citizen Kurbonali Sultanov was coerced (he was told he “had no choice”) into surrendering his phone’s password to border officers.

    Sultanov later became a defendant in a criminal case but argued that evidence from the phone should not be admitted because the device was accessed in violation of the Fourth Amendment (which protects Americans against unreasonable and warrantless searches).

    Of course, all these envisaged protections refer to US citizens, and even there prove to be sketchy in many instances. Foreign travelers (even though entering the country legally) are effectively left without any protections regarding their privacy.

    Sultanov’s argument was supported in an amicus brief filed the following year by the Knight First Amendment Institute and the Reporters Committee for Freedom of the Press, who said that the First Amendment is violated as well when law enforcement gains access to phones without a warrant since it invalidates constitutional protections of speech, freedom of the press, religion, and association.

    The New York Eastern District Court’s decision is by and large based precisely on that amicus brief. One of the arguments from it is that journalists entering the US are often forced to hand over their devices.

    The court agreed that “letting border agents freely rifle through journalists’ work product and communications whenever they cross the border would pose an intolerable risk to press freedom,” said Reporters Committee for Freedom of the Press attorney Grayson Clary in a press statement.

    Meanwhile, US Customs and Border Protection (CBP) said they were reviewing this ruling – and would not comment on what the agency said are “pending criminal cases.”

    2
  • Biden-Harris Task Force Urges Online Age Verification Digital ID Tool Development
    web.archive.org Biden-Harris Task Force Urges Online Age Verification Digital ID Tool Development

    The Biden administration's push for age verification tools raises concerns over privacy and identity verification methods.


    The online digital ID age verification creep in the US continues from a number of directions, through “recommendations” and “studies” – essentially, the government is nudging the industry to move in the direction of implementing digital ID age verification tools.

    At this point, it is happening via various initiatives and legislation, still, without being formally mandated.

    One instance is a recommendation coming from the Biden-Harris Administration’s Kids Online Health and Safety Task Force, which is telling online service providers they should “develop and inform parents about age verification tools built into the app or available at the device level.”

    The task force is led by the Department of Health and Human Services, HHS (its Substance Abuse and Mental Health Services Administration, SAMHSA), in what is referred to in official statements as a “close partnership” with the Department of Commerce.

    This initiative is presented as an industry guidance that will ensure the safety of youths on the internet, as well as their health and privacy.

    One of the steps presented in the fact sheet refers to age verification. This is a hot-button issue, particularly among privacy and security advocates, considering the methods that would be necessary to prove somebody’s real-life identity online, and that this would have to apply to all users of a site or app.

    Yet, the current White House is now “urging” the tech industry to, among other “critical steps,” develop digital ID tools built into apps or devices themselves and inform parents about them.

    The setting up of the task force and its recommendations are supposed to contribute to Biden’s “Unity Agenda,” while a report released last week talks about an “unprecedented youth mental crisis” as the reason for coming up with these recommendations for families and industry.

    The initiative, announced in May, bases its claims about a mental health crisis of previously unwitnessed proportions on a report put together by the US surgeon general and his advisory concerning social platforms.

    In addition to “sneaking in” the mention of age verification, the report also talks about the need to enact bipartisan federal legislation aimed at protecting the health, safety, and privacy of young people online.

    Another point is urging the industry to advance “action to implement age-appropriate health, safety, and privacy best practices on online platforms through federal legislation and voluntary commitments.”

    The documents’ authors from the several departments behind the task force also want platform data to become available to “independent researchers.”

    4
  • Odysee Shakes Up Video Sharing by Removing Ads
    web.archive.org Odysee Shakes Up Video Sharing by Removing Ads

    Confident in innovative monetization, Odysee claims it doesn't need ads to sustain itself.

    Odysee Shakes Up Video Sharing by Removing Ads

    Odysee, the blockchain-based video-sharing service, has announced that it will remove all advertisements from its platform effective immediately.

    The company expressed confidence in its innovative monetization programs, which are designed to support creators financially while maintaining the platform’s operational costs. “We don’t need ads to make money as a platform,” the announcement read, highlighting their commitment to creating a more open and creatively free environment.


    Odysee’s move comes at a time when many media platforms increasingly rely on advertising revenue, which can lead to conflicts of interest and potential demonetization from pro-censorship activists.

    By eliminating ads, Odysee aims to set a new standard for platform independence and user-centric service.

    The announcement also pointed to the aggressive advertising tactics employed by platforms like YouTube, which Odysee believes detract from the user experience. “Our approach may be considered niche or unconventional,” Odysee CEO Julian Chandra stated, emphasizing that their model aims to be financially sustainable and uphold an incorruptible user experience.

    This strategic pivot is expected to resonate well with Odysee’s user base, who have expressed displeasure with intrusive ads on other platforms. Odysee’s leadership believes that this ad-free model will not only benefit users but also ensure the platform’s sustainability and integrity in the long run.

    4
  • UK Riots Are Used To Call For End To Online Anonymity
    web.archive.org UK Riots Are Used To Call For End To Online Anonymity

    Anonymity on platforms fuels chaos, lawmaker calls for accountability and annual fees.

    UK Riots Are Used To Call For End To Online Anonymity

    In the wake of recent riots in the UK, Tobias Ellwood, former lawmaker, British Army reservist and a senior officer in the 77th Brigade, has voiced strong opinions on the role of social media in fueling public disorder. Ellwood, known for his previous support of vaccine passports and online vaccine certificate databases, emphasized the alleged negative impacts of online anonymity in a recent interview.

    The 77th Brigade is a British Army unit that focuses on non-traditional warfare, including what is known as “information warfare” or “psychological operations.” Formed in 2015, it includes personnel from various sections of the military as well as reservists with expertise in disciplines such as media, marketing, and public relations.

    According to Ellwood, the swift spread of misinformation and the organization of disruptive activities are significantly enhanced by social media platforms. He proposed that no one should be able to maintain a social media account without full accountability, suggesting a nominal annual fee to remove anonymity for the use of such platforms as a measure to enforce responsibility among users.

    The lawmaker drew a stark contrast between the positive representation of Britain at the Paris Olympics and the destructive behavior of rioters back home, stating that the ease of rallying and mobilizing through social media platforms exacerbates such issues.

    “Have we advanced the rules to keep up with how social media is now dominating our society?” Ellwood remarked in an interview with GB News.

    However, the proposal has met with criticism. Benjamin Jones, Director of Case Operations at the Free Speech Union, who also appeared on GB News, argued that such measures could inadvertently harm vulnerable individuals who rely on anonymity for safety and expression. Citing the example of a young ex-Muslim lesbian woman, Jones pointed out that stripping anonymity could sever critical lifelines for those in oppressive situations.

    Jones further suggested that the focus on social media’s role in societal issues distracts from deeper, long-standing problems within the country, such as community segregation and integration challenges that predate social media.

    2
  • Russia's Biometric Boom Faces Rising Data Security Fears
    web.archive.org Russia's Biometric Boom Faces Rising Data Security Fears

    Despite fears of data breaches, Russia escalates its biometric data collection, now covering millions of citizens.

    Russia's Biometric Boom Faces Rising Data Security Fears

    The Russian government is intensifying its drive to expand the use of biometric technologies across the nation. This push comes amid growing fears over personal data security, highlighted by a surge in data breaches in recent years. In 2023 alone, RBC, a Russian research agency, reported that data breaches at Russian financial institutions escalated 3.2 times compared to 2022, with about 170 million personal records compromised.

    Despite these security challenges, the Russian state has placed a significant focus on biometrics. In 2018, authorities established the Unified Biometric System (UBS), a pivotal element in Russia’s digital infrastructure which was later designated as a state information system. Maksut Shadayev, Russia’s minister of digital development, noted that biometric data submissions have been noteworthy, with figures continuously climbing. As of now, approximately 18 million Russians have comprehensive biometric profiles, with 70 million having submitted some form of biometric data to various state or private institutions, according to Riddle Russia.

    However, many Russians are hesitant to embrace this technology. Concerns stem from not only frequent data leaks but also from a general lack of understanding of biometric technologies, which have been evolving in Russia for over a decade. Furthermore, a notable event in September 2023 saw a large number of citizens submitting petitions to halt further biometric data collection following widespread rumors about invasive data capture methods via ATMs and smartphones.

    The historical trajectory of biometric data collection in Russia reflects a broad and deep integration of such technologies in everyday transactions and interactions.

    Internationally, the government is also extending the reach of these technologies to Russians living abroad, enabling them to access financial services through biometric registration.

    Amid these developments, privacy experts and concerned citizens argue for greater scrutiny and a reconsideration of the data collection process.

    0
  • Russia's Digital Clampdown: Signal is Banned as Government Tightens Grip on Private Communication
    web.archive.org Russia's Digital Clampdown: Signal is Banned as Government Tightens Grip on Private Communication

    Users turn to VPNs and proxy servers to evade the growing crackdown on encrypted communication.

    Russia's Digital Clampdown: Signal is Banned as Government Tightens Grip on Private Communication

    Russia’s official telecommunications regulator, Roskomnadzor, disclosed on Friday that it has banned the Signal messaging app from operating in the country. This action is part of a wider pattern of repression of information channels that hinder the state’s agenda, in light of the ongoing conflict in Ukraine.

    The regulator explained the reasoning behind the decision, citing Signal’s “violation of the requirements of Russian legislation which must be observed to prevent the messenger’s use for terrorist and extremist purposes.”

    Signal’s trademark feature is its end-to-end encryption, which fortifies the platform against government scrutiny of private conversations. Russian leaders, however, have grown wary of such platforms.

    In a series of additional restrictions imposed on media freedom in Russia, the authorities have cornered numerous independent outlets delivering news in Russian that dare to critique the Kremlin. Access to major social media platforms such as X has also been barred. Meta’s Facebook and Instagram have not been spared either.

    In response to the Roskomnadzor ban on the Signal messaging app in Russia, users might seek ways to circumvent the restriction and continue using the app:

    Virtual Private Networks (VPNs): VPNs can help users disguise their internet traffic and appear as if they are accessing the internet from a different country. This can bypass geographic restrictions imposed by a government.

    Proxy Servers: Similar to VPNs, proxy servers allow users to access the internet from a different IP address, potentially bypassing censorship imposed on specific services or apps.

    Signal’s Built-In Proxy Support: Signal provides a feature called “Signal Proxy” designed to help users in censored areas access the service. Users can connect via a proxy URL provided by someone outside the censored region, helping them bypass blocks.

    Alternative App Stores: Sometimes, apps banned in official app stores might be available in alternative app stores or via direct downloads from trusted sources on the internet.

    Encrypted DNS: Using encrypted DNS services can sometimes help bypass blocks that rely on DNS filtering, allowing users to resolve domain names that might be restricted.

    Each of these methods has its risks and benefits, and the effectiveness can vary based on the specific technical measures implemented by local authorities to enforce the ban. Users should also be aware of the legal risks involved in circumventing government-imposed censorship.
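    The encrypted DNS point is easy to make concrete: a plain (unencrypted) DNS query carries the looked-up hostname in cleartext, which is exactly what DNS-based filtering matches on. A minimal sketch using only the Python standard library (the hostname is purely illustrative):

    ```python
    import struct

    def build_dns_query(hostname: str) -> bytes:
        """Build a minimal DNS A-record query packet (RFC 1035)."""
        # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # QNAME: each label is length-prefixed, terminated by a zero byte
        qname = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in hostname.split(".")
        ) + b"\x00"
        question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
        return header + question

    packet = build_dns_query("signal.org")
    # The queried hostname is plainly visible inside the packet:
    print(b"signal" in packet and b"org" in packet)  # prints True
    ```

    Because the labels sit in the packet unencrypted, any on-path filter can match and drop them; DNS-over-HTTPS or DNS-over-TLS wraps this same packet in an encrypted channel, which is why it can defeat DNS-level blocking (though not IP-level blocking).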

    1
  • Privacy Guides Adds New "Hardware Recommendations" Section

    After almost 2 years, Privacy Guides has added a new Hardware Recommendations section to their website.

    Thanks to Daniel Nathan Gray and others for implementing this new hardware guide

    0
  • Presenting our DIY Dead Man Switch @ DEF CON 32
    www.buskill.in BusKill goes to DEF CON 32 - BusKill

    Join BusKill at DEF CON 32 for our presentation titled "Open Hardware Design for BusKill Cord" in the Demo Lab

    BusKill goes to DEF CON 32 - BusKill

    We're happy to announce that BusKill is presenting at DEF CON 32.

    What: Open Hardware Design for BusKill Cord
    When: 2024-08-10 12:00 – 13:45
    Where: W303 – Third Floor – LVCC West Hall


    via @Goldfishlaser@lemmy.ml

    What is BusKill?

    BusKill is a laptop kill-cord. It's a USB cable with a magnetic breakaway that you attach to your body and connect to your computer.

    [Video: What is BusKill? (Explainer Video) – youtube.com/v/qPwyoD_cQR4]

    If the connection between you and your computer is severed, then your device will lock, shut down, or shred its encryption keys -- thus keeping your encrypted data safe from thieves who steal your device.
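    The kill-cord logic can be sketched in a few lines. This is not BusKill's actual implementation (the real project reacts to USB hotplug events and supports several trigger actions); it is an illustration of the idea, and the device path and lock command below are assumptions:

    ```python
    import os
    import subprocess
    import time

    # Hypothetical device node for the cord's USB drive; the real BusKill
    # software watches udev/hotplug events instead of polling a path.
    DEVICE = "/dev/disk/by-id/usb-BusKill-cord"
    LOCK_CMD = ["loginctl", "lock-session"]  # assumed lock command (systemd)

    def device_present(path: str = DEVICE) -> bool:
        """True while the kill-cord's USB drive is still attached."""
        return os.path.exists(path)

    def watch(poll_seconds: float = 0.5, max_polls: int = 0) -> None:
        """Block until the device disappears, then run the lock command.

        max_polls=0 means poll forever; a positive value limits polling
        (useful for testing).
        """
        polls = 0
        while device_present():
            time.sleep(poll_seconds)
            polls += 1
            if max_polls and polls >= max_polls:
                return
        # Connection severed: trigger the configured action.
        subprocess.run(LOCK_CMD, check=False)
    ```

    Swapping `LOCK_CMD` for a shutdown or key-shredding command gives the other two trigger behaviors described above.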

    What is DEF CON?

    DEF CON is a yearly hacker conference in Las Vegas, USA.

    [Video: DEF CON Documentary – youtube.com/watch?v=3ctQOmjQyYg]

    What is BusKill presenting at DEF CON?

    I (goldfishlaser) will be presenting Open Hardware Design for BusKill Cord in a Demo Lab at DEF CON 32.

    What: Open Hardware Design for BusKill Cord
    When: Sat Aug 10, 12:00 PM – 1:45 PM
    Where: W303 – Third Floor – LVCC West Hall

    Who: Melanie Allen (goldfishlaser) More info

    Talk Description

    BusKill is a Dead Man Switch triggered when a magnetic breakaway is tripped, severing a USB connection. I’ve written OpenSCAD code that creates a 3D printable file for plastic parts needed to create the magnetic breakaway. Should anyone need to adjust this design for variations of components, the code is parameterized allowing for easy customization. To assemble a BusKill Dead Man Switch cord you will need:

    1. a USB-A extension cord,
    2. a USB hard drive capable of being attached to a carabiner,
    3. a carabiner,
    4. the plastic pieces in this file,
    5. a USB female port,
    6. a USB male connector,
    7. 4 magnets,
    8. 4 pogo pins,
    9. 4 pogo receptors,
    10. wire,
    11. 8 screws,
    12. and BusKill software.

    [Image: the golden BusKill decoupler with the case off – Golden DIY BusKill print]

    A full BOM, glossary, and assembly instructions are included in the GitHub repository. The room holds approximately 70 seated attendees. I'll be delivering 3 x 30-minute presentations, with some tailoring to the sort of audience I get each time.

    Meet Me @ DEF CON

    If you'd like to find me and chat, I'm also planning to attend:

    • ATL Meetup (DCG Atlanta Friday: 16:00 – 19:00 \| 236),
    • Hacker Karaoke (Friday and Sat 20:00-21:00 \| 222),
    • Goth Night (Friday: 21:00 – 02:00 \| 322-324),
    • QueerCon Mixer (Saturday: 16:00-18:00 \| Chillout 2),
    • EFF Trivia (Saturday: 17:30-21:30 \| 307-308), and
    • Jack Rhysider’s Masquerade (Saturday: 21:00 – 01:00 \| 325-327)

    I hope to print many fun trinkets for my new friends, including some BusKill keychains.

    [Image: a collection of 3D-printed bottle openers and whistles that say "BusKill" – Come to my presentation @ DEF CON for some free BusKill swag]

    By attending DEF CON, I hope to make connections and find collaborators. I hope during the demo labs to find people who will bring fresh ideas to the project to make it more effective.

    1
  • Meta's Creepy Skin Deep "Security" Idea
    web.archive.org Meta's Creepy Skin Deep "Security" Idea

    Meta's invasive biometric security measures amount to a disturbing overreach into personal privacy.

    Meta's Creepy Skin Deep "Security" Idea

    While authentic videos coming out during this US campaign season show some of the leading actors proving with their behavior that truth can indeed be stranger than fiction (in this case, than any deepfake) – Big Tech continues with its obsession with deepfake technology as a serious threat.

    A threat of such proportions, as far as the likes of Meta are concerned – or are pressured to be concerned – that it calls for some fairly drastic measures.

    Take, for example, a new patent application filed by the giant, detailing a method of authenticating users by combining vocalization – “and skin vibration.”

    … and what? The filing reveals that this is the kind of biometric data which uses not only a person’s voice but also how speaking causes that person’s skin tissue to vibrate.

    This level of “creepiness” in biometric information collection and use is explained as a need to solve security problems that come with activating systems only with one’s voice. That’s because, Meta says, voice can be “generated or impersonated.”

    But, some experts say, if skin vibration is used as “a second factor,” then that protects against deepfakes.

    Meta doesn’t state if it thinks that what’s true of voice also applies to fingerprints – but the “skin vibration authentication” is supposed to replace both fingerprints and passwords in device activation. Needless to say, Meta insists that “user experience” is improved by all this.

    Meta talks about things like smart glasses and mixed reality headsets as use cases where the technology from this new patent can be applied – yet that’s a whole lot of very invasive biometrics-based authentication for a very small market.

    For now, those are the example devices with the built-in “vibration measurement assembly” that makes this method possible; once in place, though, the tech could be used in almost any type of device – and for different purposes.

    0
  • Court Deems Google Location Data Fair Game
    web.archive.org Fourth Amendment Under Fire: Court Deems Google Location Data Fair Game

    The court determined that users have no reasonable expectation of privacy in voluntarily shared location data.

    Fourth Amendment Under Fire: Court Deems Google Location Data Fair Game

    The US Court of Appeals for the Fourth Circuit published its opinion in the United States v. Chatrie case, which concerns alleged violations of the Fourth Amendment.

    We obtained a copy of the opinion for you here.

    This constitutional amendment is supposed to protect against unreasonable (including warrantless) searches.

    At the center is Google, and how the giant’s collection of users’ locations, then accessed by others to locate a person, might constitute a violation.

    In a 2-1 vote the appellate court has decided that accessing Google location data is not a search.

    The case was originally heard by a lower court, where location data had been used to identify a bank robber. The warrant was based on the mass and indiscriminate surveillance method known as “geofencing.”

    In 2022, that court found that data collected and made available to law enforcement does mean a search has been performed in contravention of the Fourth Amendment.

    This was viewed as unconstitutional, and the court was not satisfied that (in this case) location information collected this way passed legal muster.

    Two years on, Circuit Court judges Jay Richardson and Harvie Wilkinson concluded the search of location data was – no search, at least not in their understanding of the Fourth Amendment. The dissenting opinion came from Judge James Wynn.

    Judge Richardson states that the appellant (the defendant in the appeals proceedings), Okello Chatrie, “did not have a reasonable expectation of privacy” in the Google location history the government accessed during the two hours he was “geofenced” by Google – plus, Chatrie “volunteered” the data in the first place (by using Google and its location feature).

    The Circuit Court, which extensively cited the 2018 Carpenter v. United States decision, also goes into the meaning of privacy, and possibly tries to redefine it. Namely: do “only” two hours of a person’s life (monitored by Google and then accessed by law enforcement) count? Not really, as the majority opinion put it:

    “All the government had was an ‘individual trip viewed in isolation,’ which, standing alone, was not enough to enable deductions about ‘what [Chatrie] does repeatedly, what he does not do, and what he does ensemble.’”

    And – “Chatrie voluntarily exposed his location information to Google by opting in to Location History.”

    Apart from future implications regarding geofencing, there’s a life hack hidden in this ruling as well: just to be on the safe side, never opt in to Google’s surveillance schemes.

    0
  • "Nearly All" AT&T Customers are Affected by Massive Data Breach; What To Do Next

    AT&T is facing severe criticism following a substantial data breach where hackers accessed the call records of “NEARLY ALL” its mobile subscribers, totaling approximately 109 million individuals.

    This doesn’t just affect AT&T customers, it affects everyone those customers have interacted with.

    In a statement to Reclaim The Net, the telecommunications giant confirmed that the breach occurred between April 14 and April 25, 2024, involving its Snowflake storage. Snowflake, a provider that facilitates large-scale data warehousing and analytics in the cloud, is now under scrutiny for security lapses in the wake of multiple breaches facilitated by stolen credentials.

    Recently, the security firm Mandiant identified a financially motivated hacker group, known as “UNC5537,” targeting Snowflake users. This has led to a series of data thefts, prompting Snowflake to implement stricter security measures, including mandatory multi-factor authentication for its administrators.

    The stolen data includes call and text metadata from May 1 to October 31, 2022, and a specific breach on January 2, 2023. This metadata encompasses telephone numbers, interaction counts, and aggregate call durations, affecting not only AT&T’s direct customers but also those of various mobile virtual network operators (MVNOs).

    AT&T took immediate action upon discovering the breach, engaging with cybersecurity experts and contacting the FBI. According to an official statement, the FBI, along with the Department of Justice (DOJ), evaluated the breach’s implications on national security and public safety, which led to delays in public disclosure sanctioned on May 9 and June 5, 2024. The FBI emphasized its role in assisting victims of cyberattacks and the importance of early communication with law enforcement in such incidents.

    “We have taken steps to close off the illegal access point,” AT&T continued in its statement. “We are working with law enforcement in its efforts to arrest those involved in the incident. We understand that at least one person has been apprehended.”

    Customers should take several proactive steps to protect their personal information and reduce potential risks:

    Be Wary of Phishing Attempts

    Hackers may attempt to use stolen data to craft convincing phishing emails or texts. Customers should be cautious about unsolicited communications asking for personal information or urging them to click on suspicious links.

    Use MFA (Multi-Factor Authentication)
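    An authenticator app is safer than SMS codes because the one-time codes are computed locally from a shared secret that never crosses the carrier's network. A minimal sketch of what such an app computes (RFC 6238 TOTP on top of RFC 4226 HOTP), using only the Python standard library:

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 HMAC-based one-time password."""
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """RFC 6238 time-based OTP, as generated by authenticator apps."""
        key = base64.b32decode(secret_b32.upper())
        return hotp(key, int(time.time()) // period, digits)
    ```

    The secret is exchanged once (via a QR code) and after that nothing is transmitted: both the app and the server derive the same 6-digit code from the current 30-second window, so there is no SMS for a SIM-swapper or carrier breach to intercept.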

    While passwords were not compromised in this breach, enabling MFA where available can enhance security on all digital accounts. Avoid using text messages as a form of account verification (when a company sends you a code by text message that you have to use to access your account); it is much safer to use a 2-factor authentication app.

    Avoid Using Standard Phone Calls and SMS Text Messages as Much as Possible

    Phone carriers, by virtue of their central role in facilitating communications, inherently collect and store vast amounts of metadata related to phone calls and text messages. This metadata, which includes details such as call times, durations, and the numbers involved, can be highly sensitive. Despite its non-content nature, metadata can reveal intricate details about a person’s life, habits, and social networks. Here are some reasons why phone carriers are often more vulnerable to metadata leaks:

    Large Data Stores: Phone carriers manage enormous volumes of data daily. Each call or text generates metadata that is logged and stored. The sheer volume of this data makes it a significant target for hackers, and managing its security can be challenging.

    Regulatory Requirements: Carriers are often required by law to retain metadata for certain periods for lawful intercept capabilities and other regulatory reasons. This obligation to store data can increase the risk of breaches, as older, possibly less secure systems may be used for storage.

    Complex Systems and Integration: The infrastructure of telecom companies is complex and often integrated with various legacy systems and third-party services. Each integration point can introduce vulnerabilities, potentially offering hackers multiple entry points to access and extract data.

    Insufficient Encryption Practices: While the content of communications might be encrypted, the metadata often is not. This oversight can leave sensitive information exposed to anyone who gains unauthorized access to the system.

    High Value for Surveillance and Advertising: Metadata is extremely valuable for surveillance purposes, as well as for targeted advertising. This makes it a lucrative target for unauthorized actors, including state-sponsored groups and cybercriminals looking to monetize the data.

    Delayed Disclosure: Carriers might delay disclosing data breaches due to ongoing investigations or national security implications, as seen in the AT&T breach. This delay can exacerbate the problem, increasing the window during which stolen data can be misused.

    Underestimation of Metadata Sensitivity: There is often a misconception that metadata is not as sensitive as direct communication content. This misunderstanding can lead to less rigorous security measures being applied to protect this type of data.

    Economic and Technical Resources: Despite having significant resources, phone carriers may prioritize cost-saving measures over the implementation of state-of-the-art security solutions. Additionally, updating and securing sprawling networks can be technically challenging and expensive.

    Use end-to-end encrypted apps to communicate instead and encourage family and friends to do the same.

    Using apps that offer end-to-end encryption (E2EE) is crucial for maintaining privacy and security, especially in the wake of breaches like the one experienced by AT&T, where call data was exposed. Here’s why E2EE apps are a better choice:

    Enhanced Privacy Protection: End-to-end encryption ensures that messages, calls, and files are encrypted on the sender’s device and only decrypted on the recipient’s device. This means that no one in between, not even the service providers or potential interceptors, can read or listen to the content. This is crucial when the metadata (like call logs and contact numbers) is exposed, as the content of the communications remains secure.

    Security Against Interception: E2EE is particularly important for protecting against potential eavesdropping. Even if a hacker can access transmission lines or servers, they cannot decrypt the encrypted data without the unique keys held only by the sender and receiver.

    Prevention of Third-Party Access: In cases where service providers are subpoenaed for user data, they cannot hand over what they do not have access to. E2EE means the service provider does not have the decryption keys and therefore cannot access the content of the communications, offering an additional layer of legal protection.

    Reduced Risk of Data Breaches: If a data breach occurs and encrypted data is stolen, the information remains protected because it is unreadable without the decryption keys. This significantly reduces the risk associated with data theft.

    Trust and Compliance: Using E2EE can help companies build trust with their customers by showing a commitment to privacy and security. It can also help in complying with privacy regulations and standards, which increasingly mandate the protection of personal data.

    Mitigation of Damage from Breaches: While encryption does not prevent data from being stolen, it devalues the data, making it useless to the thief. This is particularly important in incidents where sensitive information is at risk of being exposed.

    Given these advantages, users are strongly advised to prefer communication apps and services that offer robust end-to-end encryption. This not only protects the content of their communications but also serves as a critical defense mechanism in today’s digital and often vulnerable cyber landscape.
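    To see why an intermediary cannot simply read E2EE traffic, consider the key-agreement step. The sketch below is a toy finite-field Diffie-Hellman exchange, for illustration only; real messengers such as Signal use vetted elliptic-curve exchanges (X25519) inside the double-ratchet protocol, not these parameters:

    ```python
    import hashlib
    import secrets

    # Toy parameters: a Mersenne prime modulus, fine for illustration,
    # NOT for real cryptography.
    P = 2 ** 127 - 1
    G = 5

    def keypair():
        """Return (private, public); only the public value is ever sent."""
        priv = secrets.randbelow(P - 2) + 1
        return priv, pow(G, priv, P)

    a_priv, a_pub = keypair()  # Alice
    b_priv, b_pub = keypair()  # Bob

    # Each side combines its own private value with the *other's* public
    # value; a server relaying the handshake holds only a_pub and b_pub
    # and cannot perform this step.
    a_key = hashlib.sha256(str(pow(b_pub, a_priv, P)).encode()).digest()
    b_key = hashlib.sha256(str(pow(a_pub, b_priv, P)).encode()).digest()
    assert a_key == b_key  # both ends derive the same message key
    ```

    The relay sees only G, P, and the two public values; recovering the shared key from those is the discrete-logarithm problem. This is the concrete sense in which a subpoenaed (or breached) service provider has nothing readable to hand over.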

    AT&T has provided a FAQ page where customers can find out if their data was involved in the breach. It’s important for customers to use these resources to assess their exposure.

    3
  • Privacy on the Line: Tony Blair Institute Champions Digital ID
    web.archive.org Privacy on the Line: Tony Blair Institute Champions Digital ID

    Former Indian minister downplays privacy concerns while advocating for digital identity and payments infrastructure.

    Privacy on the Line: Tony Blair Institute Champions Digital ID

    Tony Blair Institute’s Future of Britain Conference 2024 (co-organized with My Life My Say) seems to have gone out of its way to cover, with a positive spin, pretty much all of the key plans and schemes contested by rights advocates, digital ID inevitably being among them.

    One of the panelists, former Indian Minister of State for Electronics, Information Technology, Skill Development and Entrepreneurship Rajeev Chandrasekhar was there to praise a major set of goals aimed at ushering in digital ID and payments by the end of the decade.

    The “umbrella” for achieving that is what’s known as the digital public infrastructure (DPI) – a buzzword shared by the UN, the EU, the WEF, and Bill Gates’ Foundation.

    At the same time, Rajeev downplayed privacy fears associated with digital ID and revealed that his country was working with others to push the initiative.

    The host asserted that introducing digital identity is “so important for the transformation of a country” (he didn’t specify in which direction this transformation is supposed to go).

    But Chandrasekhar made sure to talk about the positives, such as that the system, Aadhaar, which at this time provides 1.2 billion Indians with digital identities, is helping improve what was previously seen as his country’s “dysfunctional governance.” And he appears to suggest that the notion once common in Asia – that this type of scheme is only good for countries like China but not democracies – is shifting.

    The perception (or fact-based belief) that aggressive digitization and privacy are ultimately incompatible is “a false binary,” he said.

    And despite the many instances of Aadhaar being the target of data breaches, hacks, and the ensuing concerns for the safety of the people’s personal data, Chandrasekhar sought to downplay these dangers – by citing which legislative tools are in place that are supposed to prevent them.

    The former government official said that in India privacy and data protection are fundamental and constitutional rights and that the country has a data protection law. And this, it appears, is Chandrasekhar’s argument that privacy and policies covered by the DPI and digital ID are actually safe.

    Chandrasekhar also notes that “if you go down and deep dig a little deep into this, you can figure out solutions that can both protect the individual’s rights to information privacy as well as grow an innovation ecosystem.”

    But he does not reveal whether India, or the others he is aware of, are actually “digging a little deeper.”

    0
  • London’s Privacy Nightmare: Facial Recognition Gets a Boost at Tony Blair Conference
    web.archive.org London’s Privacy Nightmare: Facial Recognition Gets a Boost at Tony Blair Conference

    Civil rights groups warn of privacy erosion as London's Metropolitan Police expands its surveillance opportunities.

    London’s Privacy Nightmare: Facial Recognition Gets a Boost at Tony Blair Conference

    Deployment of facial recognition has received another “endorsement” in the UK, during an event co-organized by the Tony Blair Institute for Global Change – including by London’s Metropolitan Police Director of Intelligence Lindsey Chiswick.

    The Future of Britain Conference 2024 was co-hosted by My Life My Say, a charity with links to UK authorities and the US embassy in London.

    Despite civil rights groups like Big Brother Watch consistently warning against turning the UK’s high streets into a facial recognition-powered privacy and security nightmare, Chiswick was upbeat about using this technology.

    She shared that the results so far have been “really good,” and asserted that this is happening as the Met are “conscious” of privacy concerns, which is far from any pledge that those concerns are being properly addressed – the police are simply aware of them.

    Perhaps in line with that attitude, she conveniently left out the fact that the system is essentially a dragnet, scanning the faces of hundreds of thousands of law-abiding citizens in search of a very small number of criminal offenders – sometimes just to make a single arrest.

    But while Chiswick directs citizens to the Met website where they can see “transparency” in action – explanations of the legal mandate, and “all sorts of stuff” – she insists that this transparency is much better than what private companies who use the same tech offer.

    The idea seems to be to reassure the public not by stating “we respect your privacy and rights, and here is how” – but rather, “we’re less bad than the other guys.”

    According to Chiswick, facial recognition opens up a number of “opportunities” (in the mass surveillance effort) – such as crime pattern recognition, traffic management, forensic analysis, and body-worn video analysis.

    This high-ranking Met official came across as a major proponent and/or apologist of the controversial tech, describing it as a “game changer” that has already made a “huge difference” in how London is policed.

    Chiswick goes into the various techniques used to try to match images (taken by surveillance equipment, and from other sources) – one of them, live facial recognition, being the most contentious.

    She promises that the “bespoke watch list” against which live camera feed images are compared is “not massive.”

    “That’s being created over time. So it’s bespoke to the intelligence case that sits behind the deployment,” Chiswick said. “If an offender walks past the camera and there’s an alert, that’s a potential match.”

    1
  • Tony Blair Institute Hosts Controversial Panel on Health Data Commercialization
    web.archive.org Tony Blair Institute Hosts Controversial Panel on Health Data Commercialization

    Speakers push for controversial health data commercialization and surveillance measures.

    Tony Blair Institute Hosts Controversial Panel on Health Data Commercialization

    The Tony Blair Institute for Global Change and My Life My Say charity co-hosted the Future of Britain Conference 2024 and heard Blair organization’s director of health policy, Charlotte Refsum, and other panelists speak in favor of more commercialization and surveillance of health data.

    This was one of several controversial issues covered during the event, along two main lines – more surveillance of various types, and combating “disinformation.”

    Blair Institute’s choice of organizing partner is telling, as well, since My Life My Say, which focuses on getting young people out to vote, lists the UK Cabinet Office and US embassy in London, as well as the mayor of London, as its past partners or backers.

    Regarding health data, Refsum urged the creation of digital health records for all citizens, as well as a private commercial entity dubbed a “national data trust” – which would be tasked with commercializing access to sensitive health data in the country, generating revenue in that way.

    Blair himself was less straightforward, as politicians tend to be, but appears to be pushing for digital health records and a national data trust. He seemed somewhat evasive when Refsum asked him about both, speaking instead about the benefits of technology for health in general.

    Wellcome, another charitable foundation with ties to the UK government – the Department of Health and Social Care – would like to see the National Health Service (NHS) “integrate all the data” it has to achieve a “learning population health system.”

    This is according to Wellcome’s Dr. John-Arne Rottingen who is also a fan of “faster intelligence” and reaching this goal by feeding massive amounts of data into the schemes.

    Rottingen, who is Norwegian, spoke about what he considers a positive example of Scandinavian countries that have already linked access to health data “across the full population.”

    In contrast to his learning population health system is the current state of affairs, where this information is “locked in different parts of the system,” noted Rottingen.

    He urged researchers in the UK to enter public-private partnerships in order to come up with “insights” that are supposed to provide the driving force for a future “sustainable healthcare system.”

    0
  • Europol Seeks to Break Mobile Roaming Encryption
    web.archive.org Europol Seeks to Break Mobile Roaming Encryption

    Europol's push for easier access to encrypted data raises concerns over privacy and potential overreach.

    Europol Seeks to Break Mobile Roaming Encryption

    EU’s law enforcement agency Europol is another major entity that is setting its sights on breaking encryption.

    This time, it’s about home routing and mobile encryption, and the justification is a well-known one: encryption supposedly stands in the way of the ability of law enforcement to investigate.

    The overall rationale is that police and other agencies face serious challenges in doing their job (an argument repeatedly proven as false) and that destroying the internet’s currently best available security feature for all users – encryption – is the way to solve the problem.

    Europol’s recent paper treats home routing not as a useful security feature but as “a serious challenge for lawful interception.” Home routing keeps a roaming phone’s traffic encrypted by routing it through the user’s home network rather than the visited one.

    We obtained a copy of the paper for you here.

    Europol appears to want to operate on trust: the agency “swears” it needs access to this protected traffic simply to catch criminals. And if the feature was gone, then ISPs and Europol could have smooth access to traffic.

    But if the past decade or so has taught law-abiding citizens anything, it is how, given the right tools, massive government and transnational organizations “seamlessly” slip from lawful to unlawful conduct, and secretive mass surveillance.

    Not to mention that tampering with encryption – in this instance available in home routing as a part of the privacy-enhancing technologies (PET) – in security and privacy terms, means opening a can of worms.

    It turns out, as ever, that agencies like Europol actually do have other mechanisms to go after criminals, some more controversial than others: one is “voluntary cooperation” by providers outside the EU (in which case Europol has to disclose information about “persons of interest” using foreign phone cards with other countries) as well as issuing an EIO – European Investigation Order.

    But that barely compares to breaking encryption, in terms of setting up the infrastructure for effective mass surveillance. Europol’s complaint about the available procedures naturally doesn’t mention any of that – instead, they talk about “slow EIO replies” that hinder “urgent investigations.”

    Europol presents two solutions to the home routing encryption “problem”: One, disable PET in home routing. The second is a cross-border mechanism inside the EU where “interception requests are quickly processed by service providers.”

    0
  • EU Commission Urges Digital ID, E-Health Records, and Touts "Anti-Disinformation" Efforts in Digital Decade Report
    web.archive.org EU Commission Urges Digital ID, E-Health Records, and Touts "Anti-Disinformation" Efforts in Digital Decade Report

    Critics warn that increased regulation under the guise of combating "disinformation" may stifle free speech.

    EU Commission Urges Digital ID, E-Health Records, and Touts "Anti-Disinformation" Efforts in Digital Decade Report

    Earlier this week the EU Commission (EC) published its second report on what it calls “the state of the digital decade,” urging member countries to step up the push to increase access and incentivize the use of digital ID and electronic health records.

    At the same time, the bloc is satisfied with how the crackdown on “disinformation,” “online harms,” and the like is progressing.

    In a press release, the EC said the report was done to assess the progress made in reaching the objectives contained in the Digital Decade Policy Program (DDPP), targeting 2030 as the year of completion.

    EU members have now for the first time contributed to the document with analyses of their national “Digital Decade strategic roadmaps.” And, here, the EC is not exactly satisfied: the members’ efforts will not meet the EU’s “level of ambition” if things continue to develop as they currently are, the document warns.

    In that vein, while the report is generally upbeat on the uptake of digital ID (eID schemes) and the use of e-Health records, its authors point out that there are “still significant differences among countries” in terms of eID adoption.

    To remedy member countries falling short on these issues, it is recommended that they push for increased access to eID and e-Health records in order to meet the objectives set for 2030.

    The EU wants to see both these schemes available to 100% of citizens and businesses by that date – and reveals that eID is at this point available to 93% of citizens across the bloc’s 27 countries, “despite uneven take-up.”

    Still, the EC’s report shows that policymakers in Brussels are optimistic that the EU digital ID Wallet will “incentivize” eID use.

    And, the document’s authors are happy with the way the controversial Digital Services Act (DSA) is getting enforced. Critics, however, believe it is there to facilitate crackdowns on speech – under the guise of combating “disinformation,” etc.

    The EU calls this, “strengthening the protection against online harms and disinformation,” while also mentioning that it is launching investigations (into online platforms) to make sure DSA is enforced.

    And in order to reinforce the message that DSA is needed as a force for good, the report asserts that “online risks are on the rise and disinformation has been identified as one of the most destabilizing factors for our societies, requiring comprehensive, coordinated action across borders and actors.”

    0
  • Rise in Stolen Singaporean ID Data Shows The Dangers of Digital ID
    web.archive.org Rise in Stolen Singaporean ID Data Shows The Dangers of Digital ID

    Sensitive biometric data of Singaporeans sold on dark web amid significant increase in cybercriminal activity.

    Rise in Stolen Singaporean ID Data Shows The Dangers of Digital ID

    A recent advisory published by the Resecurity cybersecurity vendor exposes a trend now developing on the dark web – more and more stolen biometrics-based data is ending up in this corner of the internet.

    These revelations, describing the increase in activity of this type as “significant,” highlight the case of Singapore, including its SingPass scheme.

    At the same time, they confirm fears that the digital ID and age verification push will sooner or later turn into a privacy nightmare.

    In Singapore, every citizen and resident has a SingPass (Singapore Personal Access) digital ID account, which is touted by the authorities in the city-state as their “trusted digital identity” – not to mention a “convenient” one.

    Blackhat hackers, however, beg to differ, and it’s hard to imagine that digital ID holders affected by identity theft think of the scheme as in any way “convenient.”

    Security researchers say that overall, year-on-year, as many as 230 percent more “vendors” are now selling stolen personal information that often contains facial recognition data, fingerprints, and other biometrics belonging to Singaporeans.

    A majority of this data has been up for sale on the XSS dark web forum, according to the same source.

    In 2024 thus far, this type of activity peaked in April, following a rise in data breaches where cybercriminals targeted a number of online databases that store this information.

    Stolen citizens’ identities are then used for a variety of criminal activities, including fraud, scams, and the creation of deepfakes. But once this kind of floodgate opens, exposing particularly sensitive data, spies and various governments are never far behind the common criminals in exploiting the breaches.

    Other than supposedly being “easy and secure,” SingPass gives access to more than 1,700 government and private sector services in Singapore, both online, and in person.

    But Resecurity said that more than 2,377 of these accounts were compromised last month alone, with the firm saying the holders of those have been notified of this discovery.

    However, the firm’s advisory noted that in many cases online platforms that suffer data breaches do not disclose these incidents, which means that citizens and residents in Singapore whose identities have been stolen are not even aware of this.

    0
  • Meta Defends Charging Users For Privacy
    web.archive.org Meta Defends Charging Users For Privacy

    Critics argue the proposal violates users' rights to consent and data privacy.

    Meta Defends Charging Users For Privacy

    Meta last fall came up with an idea of how to comply with the EU’s Digital Markets Act (DMA) (not to be confused with the Digital Services Act (DSA) – considered by critics to be a “censorship law”).

    Namely, Meta announced at the time that in order to adhere to DMA, and allow an ad-free “experience” in the EU (but also in the European Economic Area, EEA, and Switzerland) Facebook and Instagram would offer subscriptions to privacy-minded users.

    The problem with what Meta calls “a free, inclusive, ad-supported” internet is not just that ads are annoying – it’s that people actually do pay what turns out to be a pretty hefty price, i.e., with their sensitive personal data monetized by the giant for “personalized,” aka, targeted advertising.

    But this “opt-out” (for a fee) – or alternatively, consent to data collection in order to continue using the platforms “for free” – didn’t go over well in the EU, for reasons presented by the EU’s Commission in its typical barely-human-readable fashion.

    As per the EC, the reasons are the following: the proposed Meta scheme “does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the ‘personalized ads-based service,’” and “does not allow users to exercise their right to freely consent to the combination of their personal data.”

    The enthusiasm of EU Commissioner for Internal Market Thierry Breton for the EC findings published earlier this week – given his previous track record – does tempt onlookers to wonder if this decision really has to do with protecting competitiveness and users in Europe – or is yet another form of pressuring Meta, at a sensitive (political) time.

    Whatever the case may be, and in general EU fashion, the Commission’s findings are only the beginning of a lengthy process expected to last for months, as Meta examines the findings and tries to counter them in defense of its position.

    What we know, thanks to a spokesperson’s statement, is that Meta will try to prevail in this controversy, among other things, by citing EU’s top court, the Court of Justice of the European Union (CJEU), as in 2023 “endorsing” its proposed scheme, which the giant asserts does comply with DMA.

    If this fight is eventually lost, Meta can look forward to fines of up to 10 percent of its global turnover, rising to 20 percent for repeat infringement.

    2
  • Multiple nations enact mysterious export controls on quantum computers
    web.archive.org Multiple nations enact mysterious export controls on quantum computers

    Identical wording placing limits on the export of quantum computers has appeared in regulations across the globe. There doesn't seem to be any scientific reason for the controls, and all can be traced to secret international discussions

    Multiple nations enact mysterious export controls on quantum computers

    Secret international discussions have resulted in governments across the world imposing identical export controls on quantum computers, while refusing to disclose the scientific rationale behind the regulations. Although quantum computers theoretically have the potential to threaten national security by breaking encryption techniques, even the most advanced quantum computers currently in public existence are too small and too error-prone to achieve this, rendering the bans seemingly pointless.

    The UK is one of the countries that has prohibited the export of quantum computers with 34 or more quantum bits, or qubits, and error rates below a certain threshold. The intention seems to be to restrict machines of a certain capability, but the UK government hasn’t explicitly said this. A New Scientist freedom of information request for a rationale behind these numbers was turned down on the grounds of national security.

    France has also introduced export controls with the same specifications on qubit numbers and error rates, as have Spain and the Netherlands. Identical limits across European states might point to a European Union regulation, but that isn’t the case. A European Commission spokesperson told New Scientist that EU members are free to adopt national measures, rather than bloc-wide ones, for export restrictions. “Recent controls on quantum computers by Spain and France are examples of such national measures,” they said. They declined to explain why the figures in various EU export bans matched exactly, if these decisions had been reached independently.

    A spokesperson for the French Embassy in London told New Scientist that the limit was set at a level “likely to represent a cyber risk”. They said that the controls were the same in France, the UK, the Netherlands and Spain because of “multilateral negotiations conducted over several years under the Wassenaar Arrangement”.

    “The limits chosen are based on scientific analyses of the performance of quantum computers,” the spokesperson told New Scientist. But when asked for clarification on who performed the analysis or whether it would be publicly released, the spokesperson declined to comment further.

    The Wassenaar Arrangement is a system adhered to by 42 participating states, including EU members, the UK, the US, Canada, Russia, Australia, New Zealand and Switzerland, that sets controls on the export of goods that could have military applications, known as dual-use technologies. Canada has also implemented identical wording on 34 qubits into a quantum computer export ban.

    New Scientist wrote to dozens of Wassenaar states asking about the existence of research on the level of quantum computer that would be dangerous to export, whether that research has been published and who carried it out. Only a few responded.

    “We are closely observing the introduction of national controls by other states for certain technologies,” says a spokesperson for the Swiss Federal Department of Economic Affairs, Education and Research. “However, existing mechanisms can already be used to prevent in specific cases exports of such technologies.”

    “We are obviously closely following Wassenaar discussions on the exact technical control parameters relating to quantum,” says Milan Godin, a Belgian adviser to the EU’s Working Party on Dual-Use Goods. Belgium doesn’t appear to have implemented its own export restrictions yet, but Godin says that quantum computers are a dual-use technology due to their potential to crack commercial or government encryption, as well as the possibility that their speed will eventually allow militaries to make faster and better plans – including in relation to nuclear missile strikes.

    A spokesperson for the German Federal Office for Economic Affairs and Export Control confirmed that quantum computer export controls would be the result of negotiations under the Wassenaar Arrangement, although Germany also doesn’t appear to have implemented any restrictions. “These negotiations are confidential, unfortunately we cannot share any details or information about the considerations of this control,” says the spokesperson.

    Christopher Monroe, who co-founded quantum computer company IonQ, says people in the industry have noticed the identical bans and have been discussing their criteria, but he has no information on where they have come from.

    “I have no idea who determined the logic behind these numbers,” he says, but it may have something to do with the threshold for simulating a quantum computer on an ordinary computer. This becomes exponentially harder as the number of qubits rises, so Monroe believes that the rationale behind the ban could be to restrict quantum computers that are now too advanced to be simulated, even though such devices have no practical applications.

    “The fallacy there is that just because you cannot simulate what the quantum computer is doing doesn’t make it useful. And by severely limiting research to progress in this grey area, it will surely stifle innovation,” he says.
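    Monroe's simulability point can be made concrete with a rough back-of-the-envelope calculation: a dense classical simulation of an n-qubit state must store 2^n complex amplitudes, so the memory required doubles with every added qubit. The sketch below is illustrative only – it assumes a statevector simulator at complex128 precision (16 bytes per amplitude); nothing in the article says how the regulators actually derived the 34-qubit figure.

    ```python
    def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
        """Memory needed to hold a dense n-qubit state vector.

        A pure state of n qubits has 2**n complex amplitudes; at
        complex128 precision each amplitude occupies 16 bytes.
        """
        return (2 ** n_qubits) * bytes_per_amplitude

    # Memory doubles per qubit: around the export-control threshold of
    # 34 qubits, a dense simulation already needs hundreds of gigabytes.
    for n in (30, 34, 40):
        gib = statevector_bytes(n) / 2 ** 30
        print(f"{n} qubits -> {gib:,.0f} GiB")
    ```

    At 34 qubits the dense representation alone needs 256 GiB – near the edge of a single large server – which is consistent with Monroe's guess that the threshold tracks what can still be simulated classically.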

    10
  • How your FedEx driver is helping cops spy on YOU

    FedEx is using AI-powered cameras installed on its trucks to aid police investigations, a new report has revealed.

    The popular delivery firm has partnered with Flock Safety, a $4 billion surveillance startup based in Georgia, Forbes reported.

    Flock specializes in automated license plate recognition and video surveillance, and already has a fleet of around 40,000 cameras spanning 4,000 cities across 40 states.

    FedEx has teamed up with the company to monitor its facilities across the US, but under the deal it is also sharing its Flock surveillance feeds with law enforcement. And it is believed to be one of four multi-billion dollar private companies with this arrangement.

    It's led critics to liken the move to rolling out a mass surveillance network - as it emerged that some local police forces are also sharing their Flock feeds with FedEx.

    Jay Stanley, a policy analyst with the ACLU, told the Virginian Pilot: 'There's a simple principle that we've always had in this country, which is that the government doesn't get to watch everybody all the time just in case somebody commits a crime.'

    'The United States is not China,' he continued. 'But these cameras are being deployed with such density that it's like GPS-tracking everyone.'

    In response to Forbes' report that FedEx was part of Flock's surveillance system, he told the outlet: 'It raises questions about why a private company…would have privileged access to data that normally is only available to law enforcement.'

    He went on to bill it as 'profoundly disconcerting'.

    Flock Safety's cameras are used to track vehicles by their license plates, as well as the make, model, and color of their cars. Other identifying characteristics are also monitored, such as dents and even bumper stickers.

    Lisa Femia, staff attorney at the Electronic Frontier Foundation, warned that FedEx's participation could prove problematic because private firms are not subject to the same transparency laws as cops.

    This, she told Forbes, could '[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network.'

    The Shelby County Sheriff's Office in Tennessee confirmed its partnership with Flock in an email to Forbes.

    'We share reads from our Flock license plate readers with FedEx in the same manner we share the data with other law enforcement agencies, locally, regionally, and nationally,' public information officer John Morris told the outlet.

    He also confirmed his department had access to FedEx’s Flock feeds.

    Its participation was unmasked after Forbes found the name of the force on publicly available lists of data sharing partners - along with others such as the Pittsboro Police Department in Indiana, located just outside of Indianapolis.

    Pittsboro police chief Scott King reportedly did not comment on why his department is participating but insisted the force had not requested access to a private system.

    'Only those listed under law enforcement,' he said.

    Assistant Chief of Greenwood Police Department Matthew Fillenwarth confirmed its force, also in Indiana, is similarly participating.

    Memphis police department also stated it had received camera feeds from FedEx but did not confirm if these were provided by Flock.

    When speaking about networks of license plate readers, Brett Max Kaufman, a senior staff attorney at the American Civil Liberties Union (ACLU), told Forbes: 'The scale of this kind of surveillance is just incredibly massive.'

    He went on to describe to the outlet how the warrantless monitoring of citizens en masse was 'quite horrifying'.

    FedEx declined to answer questions about its partnership with Flock, saying in a statement: 'We take the safety of our team members very seriously. As such, we do not publicly discuss our security procedures.'

    There is no suggestion the partnership is illegal, but some critics suggest it is flouting the basic tenets of the Constitution

    For now, it is unclear just how far-reaching the partnership between law enforcement and FedEx actually is, or how much Flock data is being shared.

    Forbes also found that FedEx was not alone in its decision to sign up - with Kaiser Permanente, the largest health insurance carrier in the US, also taking part.

    The company shared data garnered from Flock cameras with the Northern California Regional Intelligence Center, an intelligence hub that provides support to local and federal police investigations involving major crimes across California's west coast.

    'As part of our robust security programs, license plate readers are not only an effective visual deterrent, but the technology has allowed us to collaborate with law enforcement within the parameters of the law,' a spokesperson confirmed.

    'The technology has been used in response to warrants and subpoenas, as well as in other scenarios regarding potential or ongoing crimes on the facilities' premises -and it has supported the arrest and prosecution of those committing crimes.'

    The cameras were labeled to disclose to passersby they were filming - but she declined to comment when asked about where the company had these cameras deployed.

    Meanwhile, police forces around the world over the past few years continue to pick up Flock as a partner - with more than 1,800 law enforcement agencies taking part.

    Overall, more than 3,000 American communities use Flock technology, just ten years after the startup surfaced in 2014.

    The firm today is valued at nearly $4billion, and continues to receive a steady stream of venture capital.

    In 2022, it raised an astounding $300 million in just seven months, followed by $38 million in Series B funding in February the following year.

    It uses real-time data 'to enable and incentivize safer driving,' a description on its website states - describing the effort as 'the world's first fully digital insurance company for connected and autonomous commercial vehicles.'

    'Eliminate crime in your community,' reads a chyron geared toward businesses in the private sector - such as grocery stores.

    11
  • 9.4GB Twitter(X) Data Leaked - Over 200 Million Records Exposed Online
    cyberpress.org 9.4GB Twitter(X) Data Leaked - Over 200 Million Records Exposed Online

    The Cyberpress Research Team made a significant discovery: a massive 9.4 GB database from X (formerly Twitter).

    9.4GB Twitter(X) Data Leaked - Over 200 Million Records Exposed Online
    1
  • Law enforcement searches of Clearview AI facial recognition doubled in past year
    web.archive.org Law enforcement searches of Clearview AI facial recognition doubled in past year

    Agencies nationwide searched Clearview AI's database 2 million times over the past year, CEO Hoan Ton-That said.

    Law enforcement searches of Clearview AI facial recognition doubled in past year

    The number of facial recognition searches law enforcement conducted via controversial Clearview AI technology doubled to 2 million over the past year, the company said Thursday.

    The number of images stored in the company’s database of faces, which is used to compare biometrics, also has surged, now totalling 50 billion, according to a statement from CEO Hoan Ton-That.

    Last November the database contained 40 billion images, Time reported, quoting Ton-That.

    Biometric Update first reported the new statistics.

    The use of facial recognition technology by law enforcement continues to draw close scrutiny. Some police departments have banned use of the technology, but have been exposed for repeatedly asking nearby departments to run searches on their behalf.

    Critics have worried that police will abuse the technology, a fear borne out in an Evansville, Indiana, episode earlier this month. An officer there resigned under pressure after officials found he had been using the technology for personal reasons, searching social media accounts.

    Earlier this month Clearview preliminarily settled a class action lawsuit accusing it of invading people’s privacy by agreeing to give the class who filed suit and whose faces appear in its database a 23 percent stake in the company.

    The company’s technology is used by federal law enforcement agencies and police departments nationwide. A vast majority of Americans' faces are in the database.

    0
  • Devs claim Apple is banning VPNs in Russia 'more effectively' than Putin
    web.archive.org Kremlin killed our VPN in Russian App store, claim devs

    Mozilla shows guts with its extensions – but that's the way the Cook, he crumbles

    Kremlin killed our VPN in Russian App store, claim devs

    Updated At least two VPNs are no longer available for Russian iPhone users, seemingly after the Kremlin's internet regulatory agency Roskomnadzor demanded Apple take them down.

    Red Shield VPN, which is focused on providing its services to Russian users, claims it received a note from Apple that says its VPN was removed from the Russian App Store. The email, which the VPN operator shared on X, says Cupertino had to remove the app from the App Store in Russia since the software did not "conform with all local laws." This is after the Kremlin had apparently spent years trying technological approaches to block the use of the VPN.

    "Apple's actions, motivated by a desire to retain revenue from the Russian market, actively support an authoritarian regime," Red Shield said in a statement.

    "Over the past six years, Russian authorities have blocked thousands of Red Shield VPN nodes but have been unable to prevent Russian users from accessing them. Apple, however, has done this job much more effectively for them.

    "This is not just reckless but a crime against civil society. The fact that a corporation with a capitalization larger than Russia's GDP helps support authoritarianism says a lot about the moral principles of that corporation."

    Le VPN also says it was taken off the Russian App Store, and shared the same email.

    Roskomnadzor has been on a bit of a banning spree lately, and previously tried to pressure Mozilla into removing five apps, including VPNs, from the Mozilla store. However, the Firefox maker reversed its ban after a week and the five apps have since remained up.

    Google has apparently received similar requests, Russian internet freedom NGO Roskomsvoboda (not to be confused with Roskomnadzor) told The Register.

    "We also know that Google has received similar requests from the Russian regulatory agency and has even notified some proxy services that they might face removal," Roskomsvoboda claims. "However, it has not taken any action so far."

    Roskomsvoboda believes eight VPN apps are no longer available on the Russian App Store, including popular ones such as NordVPN, Proton, and Private Internet Access.

    However, not all of these VPNs were seemingly taken down recently or even by Roskomnadzor itself. "We have not received any communication from Apple as we have unlisted our apps from Russian versions of application stores ourselves back in 2023," a NordVPN representative told The Register. We've reached out to Roskomsvoboda for clarification on what VPNs Roskomnadzor itself has banned lately.

    Vladimir Putin's Russia has been struggling to get VPNs taken down for years now, and when a Russian senator claimed in October that 2024 would see Roskomnadzor finally crack down on VPNs in a big way, it wasn't clear if anything would come of it. However, it seems Roskomnadzor's fresh focus is on stopping VPN apps from being distributed rather than merely blocking VPN servers.

    We have asked the Tim Cook-run Apple to confirm it sent the notifications.

    1
  • Mississippi’s Age-Verification Digital ID Law is Declared Unconstitutional

    A Mississippi state law (introduced as House Bill 1126) that, among other things, requires platforms to implement age verification, has been declared largely unconstitutional by the US District Court for the Southern District of Mississippi.

    The law was challenged by Big Tech trade group NetChoice in the NetChoice v. Fitch lawsuit. As the law was to come into force on July 1, the plaintiff asked for a preliminary injunction to prevent enforcement.

    This has now been granted in part and denied in part by the district court, which found that “a substantial number, if not all, of H.B. 1126’s applications are unconstitutional judged in relation to its legitimate sweep.”

    We obtained a copy of the decision for you here.

    Observers are now waiting to see whether the ruling aligns with the – apparent – direction the Supreme Court has been giving lower courts, or how it might affect the Court's own future decisions.

    Related: The 2024 Digital ID and Online Age Verification Agenda

    Meanwhile, the Mississippi law is yet another in a series of legislative efforts introduced under the banner of protecting children from predatory behavior online. As summed up by the court, one of the bill’s sections requires “all users” – adults and children – to verify their age.

    Verification would be required to create an account on what the law refers to as non-excluded internet services, while another provision requires parental consent when a minor opens such an account.

    “This burdens adults’ First Amendment rights, and that alone makes it overinclusive,” is how the court explained its ruling against age verification.

    Meanwhile, the legislation also sets limitations on the data the relevant online services can collect. In addition, platforms are required to make “commercially reasonable efforts to develop and implement a strategy to prevent or mitigate the known minor’s exposure to harmful material.”

    The data collection provision was not contested in the lawsuit, and so the court opinion issued this week does not deal with that – while it grants motions for a preliminary injunction concerning other provisions, as filed by NetChoice.

    The court – unlike some observers, who warn about the harmful consequences of such efforts – sees nothing but good intentions behind bills like H.B. 1126, but is critical of the way the law (vaguely) delineates its own scope and even its definition of a digital service provider.

    In addition, the court cites the lack of specificity concerning how tech platforms would go about ascertaining somebody’s parental status, and faults the law in general as being “either overinclusive or underinclusive, or both.”

    The case is now expected to move into the appeals stage with the Fifth Circuit.

    1
  • Australia's Chief Censor To Force Online Digital ID Within Six Months
    web.archive.org Australia's Chief Censor To Force Online Digital ID Within Six Months

    Critics argue the move could destroy online anonymity and compromise physical safety under various regimes.

    Australia's Chief Censor To Force Online Digital ID Within Six Months

    It’s really a no-brainer both for politicians and those crafting the wording and perception of their policies.

    Namely – if you want genuinely complex and controversial initiatives (such as those related to mass surveillance and privacy infringements) fast-tracked both in legislatures and the media/public, just frame them as geared toward “child safety.”

    Job done. Not many will even attempt to stand up to this, even if arguments in favor are patently disingenuous.

    One gets the sense this is what Australia’s “chief censor” – eSafety Commissioner Julie Inman Grant – is there to do – and she seems to understand her assignment well. Whether she succeeds, though, is a whole different question.

    For right now, Grant is not letting up on trying to attack online security and privacy via demands for swift implementation of age verification schemes by online platforms.

    Grant is now setting a six-month deadline and threatening mandatory codes unless these platforms play along.

    It might bear repeating, and louder, “for the people in the back”: The only way to truly verify anyone’s age online is for adults with a government-issued ID to present a copy of it to the platforms ruling the internet – ruled by governments.

    This effectively destroys online anonymity, and in many countries and under many regimes, people’s (physical) safety.

    To her “credit” – Grant does seem to always be more concerned about how her initiatives are perceived, rather than what they actually realistically can achieve.

    And so reports say her latest push is to have online platforms implement age verification over the next six months or be forced to do so by a “mandatory code.”

    The alternative to the country enforcing “child safety rules” is that these rules will eventually be imposed.

    (The rules in question are related to access to pornography but also “other inappropriate” content; “suicide,” and “eating disorders” are lumped into this, and, it’s unclear if “eating disorders” as defined by Grant, include only undereating, or overeating as well.)

    Effectively, Grant has set October 3 of this year as the deadline for tech companies to tell the Australian government how they plan to implement their own “codes” – before the government does it for them. As any good democratic government does /s.

    The scope of the envisaged standards is quite wide: standards for “app stores, websites including pornography and dating websites, search engines, social media platforms, chat services, and even multi-player gaming platforms check(ing) that content is suitable for users,” Grant is quoted.

    7