The performance of DeepSeek models has made a clear impact, but are these models safe and secure? We use algorithmic AI vulnerability testing to find out.
See my comment above regarding state actors. The Chinese government apparently tries to influence narratives on TikTok.
I would also like to offer some criticism of the so-called 'manufacturing of consent'. Chomsky and Herman made some points about corporate media, but their conclusion is wrong. People do not consent to the news just because they can't influence its content. You can 'manufacture the news', as corporate media do in the U.S. and 'the West' and as the Communist Party does in China, but that does not mean people consent.
China, in particular, has developed sophisticated strategies to control narratives and influence public opinion through digital platforms. This phenomenon, often referred to as “networked authoritarianism,” involves state actors using subtle tactics like algorithmic manipulation and strategic content curation to shape narratives on popular social media platforms.
@Onno
No, it's not entirely open source, as the datasets and code used to train the model are not.
In addition to my comments, we can add that Wiz Research uncovered an exposed DeepSeek database leaking sensitive information, including chat history.
TLDR: DeepSeek had left over a million lines of sensitive data exposed on the open internet, including digital software keys.
DeepSeek is not open source, but the Hugging Face developers are working on an open version:
Hugging Face developers are in the process of building a fully open reproduction of DeepSeek-R1
Researchers say they had a ‘100% attack success rate’ on jailbreak attempts against Chinese AI DeepSeek
cross-posted from: https://lemmy.sdf.org/post/28910537
Researchers claim they had a ‘100% attack success rate’ on jailbreak attempts against Chinese AI DeepSeek
"DeepSeek R1 was purportedly trained with a fraction of the budgets that other frontier model providers spend on developing their models. However, it comes at a different cost: safety and security," researchers say.
A research team at Cisco managed to jailbreak DeepSeek R1 with a 100% attack success rate. This means that there was not a single prompt from the HarmBench set that did not obtain an affirmative answer from DeepSeek R1. This is in contrast to other frontier models, such as o1, which blocks a majority of adversarial attacks with its model guardrails.
...
In other related news, experts cited by CNBC say that DeepSeek’s privacy policy “isn’t worth the paper it is written on.”
In related news:
Using algorithmic jailbreaking techniques, our team applied an automated attack methodology to DeepSeek R1, testing it against 50 random prompts from the HarmBench dataset. These covered six categories of harmful behaviors, including cybercrime, misinformation, illegal activities, and general harm.
The results were alarming: DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt. This contrasts starkly with other leading models, which demonstrated at least partial resistance.
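For readers curious what such an evaluation loop looks like in practice, here is a minimal sketch of computing an attack success rate (ASR) over a set of harmful prompts. It is an illustration only: `query_model` and `looks_like_refusal` are hypothetical placeholders, not Cisco's actual harness or HarmBench's official judge.

```python
import random  # used in the commented-out sampling example below


def query_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to the model under test and return its reply."""
    raise NotImplementedError


def looks_like_refusal(reply: str) -> bool:
    """Crude stand-in for a proper harm classifier: flag replies containing common refusal phrases."""
    refusal_markers = ("i can't", "i cannot", "i won't", "i'm sorry")
    return any(marker in reply.lower() for marker in refusal_markers)


def attack_success_rate(prompts: list[str]) -> float:
    """ASR = fraction of harmful prompts the model answered instead of refusing."""
    successes = sum(1 for p in prompts if not looks_like_refusal(query_model(p)))
    return successes / len(prompts)


# Usage (assuming `harmbench_prompts` has been loaded elsewhere):
# sample = random.sample(harmbench_prompts, 50)
# print(f"ASR: {attack_success_rate(sample):.0%}")  # 100% would mean not a single prompt was blocked
```

A real evaluation would replace the refusal heuristic with a dedicated judge model, but the overall loop is this simple: send each harmful prompt, check whether the model refused, and report the fraction it answered.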
CNBC reports that DeepSeek’s privacy policy “isn’t worth the paper it is written on.”
There seems to be a long way to go, but Hugging Face developers are in the process of building a fully open reproduction of DeepSeek-R1, since the AI is not open source as claimed.
UAC-0063: Cyber Espionage Operation Expanding from Central Asia to Europe, Targeting High-Profile Institutions Including Government Entities, Diplomatic Missions
cross-posted from: https://lemmy.sdf.org/post/28777516
Bitdefender Labs warns of an active cyber-espionage campaign targeting organizations in Central Asia and European countries. The group, tracked as UAC-0063, employs sophisticated tactics to infiltrate high-value targets, including government entities and diplomatic missions, expanding their operations into Europe.
Since the start of the Ukraine war, the geopolitical landscape of Central Asia has undergone significant shifts, impacting the region's relationships with both Russia and China. Russia's influence, once dominant, has noticeably declined due to its actions in Ukraine, which have damaged its reputation as a regional security guarantor, with some Central Asian countries feeling that Russia doesn't respect their sovereignty.
In contrast, China's influence in Central Asia is growing, particularly in the ec
The first thing I found suspicious about this post is the source: the SCMP is controlled by the Chinese government.
How does that 'empower' people if they 'must go offline' to discuss whatever they want? This doesn't make sense.
Yes, it was unblocked after a few weeks. Following a long investigation, the Italian authorities fined OpenAI EUR 15m for GDPR violations. You can find a very detailed analysis of the issues here.
Italy blocks access to the Chinese AI application DeepSeek to protect users' data, announces investigation into the companies behind chatbot
Italy’s data protection authority has blocked use of Chinese tech startup DeepSeek’s AI application to protect Italians’ data and announced an investigation into the companies behind the chatbot.
cross-posted from: https://lemmy.sdf.org/post/28774804
Italy’s data protection authority on Thursday blocked access to the Chinese AI application DeepSeek to protect users’ data and announced an investigation into the companies behind the chatbot.
The authority, called Garante, expressed dissatisfaction with DeepSeek’s response to its initial query about what personal data is collected, where it is stored and how users are notified.
“Contrary to the authority’s findings, the companies declared that they do not operate in Italy, and that European legislation does not apply to them,” the statement said, noting that the app had been downloaded by millions of people around the globe in just a few days.
DeepSeek’s new chatbot has raised the stakes in the AI technology race, rattling markets and catching up with American generative AI leaders at a fraction of the cost.
Italy: Data protection authority restricts use of Chinese AI program DeepSeek
The Italian data protection watchdog is taking action against the new AI app from China. There was already friction over ChatGPT.
The Italian data protection authority (GPDP) has effectively prohibited the Chinese companies behind the AI program DeepSeek from using data in Italy. The restriction was imposed on the companies Deepseek Artificial Intelligence and Beijing Deepseek Artificial Intelligence “urgently and with immediate effect,” the GPDP announced on Thursday. At the same time, investigations into the companies have been opened.
According to its own statements, the GPDP had asked the companies on Wednesday what personal data they collect from Italian users. The companies’ response was “wholly inadequate,” which is why the restriction was imposed on the DeepSeek operators. The aim is “to protect the data of Italian users.”
[...]
The Italian data protection authority had already opened an investigation into the US AI company OpenAI in 2023 over its ChatGPT program. In December 2024 it concluded that investigation and, among other things, imposed a fine.