

No, I don’t think I will.




Microsoft-supported formats are badly documented, and regularly broken by software updates before the changes are understood (if the loose spec we used to have even gets updated). That’s a problem.


That’s… not applicable here. Like, at all. To reproduce a printed document, you feed it in as-is. To make a 3D print, you produce a tailored list of operations that depends on many, many settings. Usually, the file that reaches the printer gives little indication of what is being printed, short of an expensive reconstruction that would only recover the general shape, if even that. And even if you can send actual 3D model files to a printer that does the slicing locally, there’s no “absolutely required” fingerprint there. A tube is a tube.
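To make the point concrete, here’s a toy sketch (the G-code lines are made up, but G1 linear moves are standard) of what a printer actually receives after slicing: coordinates and extrusion amounts, with nothing that says what object they build.

```python
# What a printer actually sees after slicing: G1 = linear move,
# with axis positions (X/Y), extrusion amount (E), feed rate (F).
# Nothing here identifies the object being built.
gcode = [
    "G1 X10.0 Y10.0 E0.5 F1800",
    "G1 X20.0 Y10.0 E1.0",
    "G1 X20.0 Y20.0 E1.5",
]

def parse_move(line):
    """Extract the axis words from a G1 move; this is all the 'meaning' there is."""
    words = line.split()
    return {w[0]: float(w[1:]) for w in words[1:]}

for line in gcode:
    print(parse_move(line))
```

Reconstructing “what is being printed” from thousands of such moves is the expensive shape-recovery problem mentioned above.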
And, just so you know, there’s a slew of printers and scanners on the market that will just plain not recognize any of this, either. There are also “protection” patterns in some official documents; large office printers would choke on them, while a home scanner was fine. This is, at best, enforceable in only the flimsiest of ways.


Let’s entertain the thought. How would one identify what is a gun part being printed versus a tube, a mechanical latch, or whatever else? Heck, I once printed a plastic replica of a movie prop. Would that be illegal?
I mean, I’m not in the US, and I know how to drive three steppers from a list of extremely basic instructions that never, ever looks anything “final part-y”, but the question remains. How do we go from “lots of gcode” to “yep, that’s definitely illegal” without declaring that everything is illegal?


That’s basically what we used to do before big printers came along :D


Private workshops are next on the chopping block, then. Totally feasible. /s
Which one? The only thing mentioned by name here is GNOME.


If the entire supply chain, up to the software you’re running to perform the actual decryption, is compromised, then the decrypted data is vulnerable. I mean, yeah? That’s why we use open-source clients and verify builds/use builds from a separate source, so that the compromise of one actor does not compromise the whole chain. The server (if any) is managed by one entity and only handles access control plus encrypted data, a client from a separately trusted source handles decryption, and the general safety of your whole system remains your responsibility.
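The split described above can be sketched in a few lines. This is a deliberately toy example (XOR one-time pad, not real cryptography): the point is only that the key never leaves the client, so the server, holding only ciphertext, learns nothing even if compromised.

```python
import secrets

# Toy sketch, NOT real crypto: XOR one-time pad, just to show the
# trust split. The key stays client-side; the server stores ciphertext.

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # generated and kept by the client
ciphertext = encrypt(key, message)        # this is all the server ever sees

assert decrypt(key, ciphertext) == message
```

A real client would use an audited library (AES-GCM, etc.), but the architectural point is identical: compromise of the storage side alone yields nothing readable.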
Security always requires a modicum of awareness and involvement from the users. The only news here is that people apparently never considered supply chain attacks until now?


a novelty security feature for hubcaps that you don’t want to be removed too easily
If this picks up, the people you’d want to keep from removing these too easily will be the first to get the right tools to remove them easily.


Didn’t they already do that in their public posts or whatever? They don’t care.


Matrix, the central service, might work, but I’m not sure it could handle the load well. Matrix, the federated service, hosted by many people, has performance issues with the “free” version. I could not test the paid/more optimized version, so I can’t speak to that.
Anyway, the protocol and clients have their issues. All of these stem from usage; I did not do a deep dive into its internals. But off the top of my head:
With that said, nothing is an actual show stopper for small-scale usage, and the heavily optimized server might hold up well enough, as long as you’re mainly concerned with text rooms. But open instances handling hundreds of users might be a stretch… for now. Maybe this will spur more development in the Matrix/Element ecosystem.


Math has little room for backdoors.
To test a very stubborn program, I had to install Windows in a VM and use it for 20 minutes yesterday. It felt like swimming in a swamp located at the exit pipes of a factory that exclusively produces shit and deadly biohazard material.
You can put your taskbar in the middle of the screen if you so desire!


Unless an incredible number of people are “not in” on some universal secret, maths gonna maths, and physics gonna physics. Actual encryption works well in a proven way, computational power isn’t as infinite as some people think, and decent software implementations exist.
Getting hold of anything properly encrypted, with no access to the key, still requires an incredible amount of computing power to brute-force. Weak/bad implementations can leave enough information behind to speed this up, malicious software can sneak in an extra, undocumented encryption key, etc., but a decent implementation would not be easy to break into.
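A quick back-of-the-envelope calculation shows what “incredible amount of computing power” means here (the guess rate and machine count are generous, made-up assumptions):

```python
# Time to exhaust a 128-bit keyspace by brute force.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12   # assumption: a very generous single machine
machines = 10 ** 9              # assumption: a billion such machines

seconds_per_year = 3600 * 24 * 365
years = keyspace / (guesses_per_second * machines * seconds_per_year)
print(f"{years:.2e} years")     # on the order of 1e10 years
```

Even with absurdly optimistic hardware, exhausting the keyspace takes on the order of the age of the universe, which is why attacks target implementations and key handling instead of the math.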
Now, this says nothing about what Apple actually does. They claim to have proper encryption, but with anything closed source, you only have your belief to back you up. Still, it’s not an extraordinary claim that this can be done competently. And Apple has a good incentive to do so: good PR, and no real downside for them, since people happily unlock their phones to keep their software running and doing whatever it wants locally.


I don’t know how most package managers on Windows work, but usually, auto-updates are disabled by default for software installed through one. For example, Firefox installed via APT on various Linux distros will not auto-update outside of it.
I vaguely remember Chocolatey packages not really doing that, causing mismatches between installed versions and its internal database, though, so maybe it wasn’t that good a solution.


The software itself, and the devs, had little to nothing to do with this besides detecting the issue. Which was not obvious, since (it seems) the attack targeted specific IPs/hosts/places. It likely worked transparently, without alteration, for most users, probably including the devs themselves.
It also would only affect updates through the built-in updater; if you disabled that, and/or installed through a package manager, you would not have been affected.
A disturbing situation indeed. I assume some changes were made to ensure updates are adequately digitally signed (at least, I hope… I don’t really use N++ anymore). But the reality is, some central infrastructure is vulnerable to people with a lot of resources, and actually plugging those holes requires a bit of involvement from the users, depending on how far one wants to go. Even if everything is signed, you have to either know the signatory’s public key beforehand or get a certificate that you trust. And that trust is derived from an authority you trust (either automatically through common CA lists, or because you manually added it to your system). These authorities can themselves become a weak point when a state actor butts in, meaning the only good solution is double-checking those certificates against the actual source, and actually blocking everything when they change, which is somewhat tedious… and so on and so on.
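One low-tech version of “double-checking against the actual source” is verifying a downloaded installer’s SHA-256 against the checksum the project publishes through a separate channel (the filename and checksum below are hypothetical placeholders):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large installers don't load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against the value published by the project.
# published = "…checksum from the project's site, fetched over a separate channel…"
# if sha256_of("npp-installer.exe") != published:
#     raise SystemExit("checksum mismatch: do not run this installer")
```

This only moves the trust problem to “is the published checksum itself authentic”, which is exactly the certificate/CA regress described above, but it does defeat a compromised download mirror or updater endpoint on its own.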
Of course, some people do that, when security matters a LOT. But for most people, basic measures should be enough… usually.


Notepad++ installed from any package manager was perfectly fine and safe.


I’ve kind of stopped keeping up since I left Windows, but maybe you’re remembering when this actually happened, a while ago? This is just an in-progress post-mortem report.
Yeah. I got a hunch about that a while ago, while retrying some “old” de-anonymization scenarios we used to do by hand. Just asking questions and posting pictures got surprisingly accurate results. A single picture with (to me) no significant landmark could narrow things down to a specific part of a city, and that was using a local LLM with a relatively small model, running on a 16GB VRAM 4060Ti.
It is now time to remember fondly the days when younger people were warned by older people not to post all their stuff online, not to over-share, to be cautious about strangers, etc. I’m not sure when we lost that, but oh boy, it’s a festival.