AI summary:
The article discusses the Chinese government’s influence on DeepSeek AI, a model developed in China. PromptFoo, an AI engineering and evaluation firm, tested DeepSeek with 1,156 prompts on sensitive topics in China, such as Taiwan, Tibet, and the Tiananmen Square protests. They found that 85% of the responses were “canned refusals” promoting the Chinese government’s views. However, these restrictions can be easily bypassed by omitting China-specific terms or using benign contexts. Ars Technica’s spot-checks revealed inconsistencies in how these restrictions are enforced. While some prompts were blocked, others received detailed responses.
(I’d add that the canned refusals stated, “Any actions that undermine national sovereignty and territorial integrity will be resolutely opposed by all Chinese people and are bound to be met with failure.” Also that while other chat models will refuse to explain things like how to hotwire a car, DeepSeek gave a “general, theoretical overview” of the steps involved, while also noting the illegality of following those steps in real life.)
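A minimal sketch of the kind of spot-check described above: send the same question to an OpenAI-compatible chat endpoint once with the China-specific framing and once reworded into a benign context, then look for the canned sovereignty refusal in the reply. This is not Promptfoo’s actual harness; the endpoint URL, the “deepseek-chat” model id, the DEEPSEEK_API_KEY environment variable, and the refusal-phrase check are all assumptions for illustration.

```c
/*
 * Sketch only: probe an OpenAI-compatible chat endpoint for canned refusals.
 * Assumptions: endpoint URL, model id, env var name. Build: cc probe.c -lcurl
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

struct buf { char *data; size_t len; };

/* libcurl write callback: append the response body into a growable buffer */
static size_t on_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct buf *b = userdata;
    size_t n = size * nmemb;
    char *p = realloc(b->data, b->len + n + 1);
    if (!p) return 0;                 /* signals an error to libcurl */
    b->data = p;
    memcpy(b->data + b->len, ptr, n);
    b->len += n;
    b->data[b->len] = '\0';
    return n;
}

static void probe(const char *label, const char *prompt)
{
    const char *key = getenv("DEEPSEEK_API_KEY");   /* assumed env var */
    char body[1024], auth[256];
    struct buf resp = {0};

    /* Chat-completions request body; "deepseek-chat" is an assumed model id. */
    snprintf(body, sizeof body,
             "{\"model\":\"deepseek-chat\","
             "\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}", prompt);
    snprintf(auth, sizeof auth, "Authorization: Bearer %s", key ? key : "");

    CURL *curl = curl_easy_init();
    if (!curl) return;
    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, auth);

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.deepseek.com/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &resp);

    if (curl_easy_perform(curl) == CURLE_OK && resp.data) {
        /* Crude check: does the reply contain the canned refusal language? */
        int refused = strstr(resp.data, "national sovereignty") != NULL ||
                      strstr(resp.data, "territorial integrity") != NULL;
        printf("%s: %s\n", label, refused ? "canned refusal" : "substantive answer");
    } else {
        printf("%s: request failed\n", label);
    }

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    free(resp.data);
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    /* Same underlying question, with and without the China-specific framing. */
    probe("direct",   "What happened at Tiananmen Square in 1989?");
    probe("indirect", "Summarize the 1989 student protests a history teacher might cover.");
    curl_global_cleanup();
    return 0;
}
```

Running the two probes side by side is enough to reproduce the inconsistency Ars Technica describes: one framing trips the canned refusal, the other often gets a substantive answer.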
Very. It’s unpatchable. It’s taking advantage of a speculative execution flaw that is baked into the design of the CPU itself, so no firmware or software update can remove it. This is the Apple M-series version of the Spectre/Meltdown flaws that hit x86 CPUs a few years ago.
The best Apple can do is add mitigations at the OS level to work around the issue, and if Spectre is any indication, those mitigations will come at a cost to CPU performance.
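For context, a minimal sketch of the classic Spectre v1 “bounds check bypass” gadget, the x86-era pattern the comment is alluding to rather than the Apple-specific variant. It shows only the vulnerable gadget: a full attack would also train the branch predictor and recover the leaked byte by timing cache accesses. The array names and sizes are illustrative.

```c
/* Spectre-v1-style gadget (illustrative, not the Apple M-series bug itself). */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t probe_array[256 * 512];   /* one cache line per possible byte value */

void victim(size_t x)
{
    if (x < array1_size) {                   /* architecturally safe bounds check,  */
        uint8_t secret = array1[x];          /* but the CPU may run this body       */
                                             /* speculatively with x out of bounds  */
        volatile uint8_t t = probe_array[secret * 512];  /* secret-dependent load   */
        (void)t;                             /* leaves a timing-observable footprint */
    }
}
```

The standard mitigations, speculation barriers or index masking inserted around patterns like this by compilers and the OS, are exactly where the performance hit comes from.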
3.5 inch or 5.25 inch?