

Yeah I agree on these fronts. The hardware might be good but software frameworks need to support it, which historically has been very hit or miss.


Depends strongly on what ops the NPU supports IMO. I don’t do any local gen AI stuff, but I do use ML tools for image processing in photography (e.g. Lightroom’s denoise feature, GraXpert denoise and gradient extraction for astrophotography). These tools are horribly slow on CPU. If the NPU supports the right software frameworks and data types then it might be nice here.
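To make the framework-support point concrete: with ONNX Runtime, it mostly comes down to whether the installed build exposes an execution provider for the accelerator, and you silently fall back to CPU otherwise. A minimal sketch, where the provider strings and model path are just assumptions for illustration:

```python
# Minimal sketch: prefer an NPU-backed ONNX Runtime execution provider if
# the installed build exposes one, otherwise fall back to CPU. The exact
# provider strings vary by vendor/build, so treat these as assumptions.
import onnxruntime as ort

available = ort.get_available_providers()

preferred = [p for p in ("QNNExecutionProvider",   # e.g. Qualcomm NPUs
                         "DmlExecutionProvider",   # DirectML on Windows
                         "CPUExecutionProvider")   # always-available fallback
             if p in available]

# "denoise_model.onnx" is a hypothetical model path for this sketch.
session = ort.InferenceSession("denoise_model.onnx", providers=preferred)
print("Running on:", session.get_providers()[0])
```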


You’re correct about all of this, but it’s way easier to press print than machine a part from stock. I do some machining as well (I don’t own the machines, but I’m trained on the mill, lathe, and waterjet in our shop). So most of the time if I can get away with a 3d printed part, it’s worth it for the time savings alone. Plus sometimes the easiest or optimal geometry to design is not something that can be machined, but can be printed.
It’s only in specific circumstances, like creep and heat resistance irrespective of print parameters, that the basic filaments fall short. ASA and PET-CF work well in most of those spots, so I don’t bother with anything more exotic.


I’ll need to give this a read, but I’m not sure what’s novel here. The core idea sounds a lot like GaussianImage (ECCV '24), where they essentially do 3DGS but with 2D Gaussians, fitting an image with fewer parameters than implicit neural methods. Thanks for the breakdown!
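For anyone curious what that looks like mechanically, here’s a toy version of fitting an image with 2D Gaussians by gradient descent. It’s a deliberately simplified sketch (axis-aligned Gaussians and naive dense rendering, where the actual papers use rotated covariances and fast rasterizers), and all the sizes and hyperparameters are made up:

```python
# Toy GaussianImage-style fit: represent an image as a sum of 2D Gaussians
# and optimize their centers, scales, and colors against an MSE loss.
import torch

H, W, N = 64, 64, 256                     # image size, number of Gaussians
target = torch.rand(H, W, 3)              # stand-in for a real image

mu = torch.nn.Parameter(torch.rand(N, 2))            # centers in [0, 1]^2
log_sigma = torch.nn.Parameter(torch.full((N, 2), -3.0))  # per-axis scales
color = torch.nn.Parameter(torch.rand(N, 3))

ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
grid = torch.stack([xs, ys], dim=-1)      # (H, W, 2) pixel coordinates

def render():
    # Axis-aligned Gaussians only (no rotation), for brevity.
    sigma = log_sigma.exp()                               # (N, 2)
    d = grid[None] - mu[:, None, None, :]                 # (N, H, W, 2)
    w = torch.exp(-0.5 * (d / sigma[:, None, None, :]).pow(2).sum(-1))
    return torch.einsum("nhw,nc->hwc", w, color)          # (H, W, 3)

opt = torch.optim.Adam([mu, log_sigma, color], lr=1e-2)
for step in range(2000):
    loss = (render() - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```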


If you have multiple views of the object and can take a video, NeRF and Gaussian Splatting tools can build a 3D model if you have an NVIDIA GPU. I don’t know if there are good user-facing tools for this, though (I mess with these things in my research); if you have a technical background you might be able to get Nerfstudio to work.


That’s all fair! For myself I use a lot of PET-CF, especially annealed. For some applications the stiffness and creep resistance of annealed PET-CF let you get away without a machined part, so for me an air fryer or equivalent is a must (for both drying and annealing). I build stuff for astrophotography, so a material that is both stiff and heavily creep-resistant is essential.
I’ll note that for PETG, if your print is under nontrivial load, it will probably start to deform well below the 80C mark; the continuous-use limit is, I believe, around 70C. Though because PETG is so inexpensive, you can always just reprint as long as assembly isn’t too difficult.
You’re right that PLA, PETG, and TPU are like 90% of anyone’s needs though.


Having a way to dry filament is a good idea. You can do so with a cheap food dehydrator or a dedicated filament dryer.
My favorite way is an air fryer, as it can actually hit the temperatures certain engineering filaments need (ASA-CF, PET-CF, PPS-CF), and the forced air, combined with the fact that air fryers aren’t sealed, tends to be more effective than the spool-holding dryers. I then print from a dry box made from cereal containers and molecular sieves. This is overkill if you’re just printing standard filaments, though (PLA, PETG, TPU, etc.).
I’ll add that there are still fairly common situations when you’ll want ABS/ASA: If you’re building something with stepper motors (say parts for a printer), ABS/ASA’s higher temperature resistance means you can push more current through your motors without deforming the print where the motor is mounted. This is of course especially helpful if you’re putting parts into a heated chamber, where PETG will likely start to deform under prolonged use at 60C+ temperatures. ABS/ASA are also more rigid, so they’re better for high speed printer parts. Finally, if you’re putting something in a car in a hot environment, PETG will not really hold up, but ASA will.


W.r.t. your spool question, ABS is still a great material choice for a lot of applications since it has well-rounded properties: reasonably strong, reasonably rigid (but not brittle), reasonably creep resistant, and fairly temperature resistant (probably the cheapest filament that can withstand a hot car). It’s generally a bit tricky to print, though. You need an enclosed printer for good results (much better layer adhesion and less risk of warping and cracking), and it emits styrene fumes that you don’t want to be breathing. I always put my printer outdoors if I’m printing ABS or its successor, ASA.
If you don’t care about that high temperature resistance and just want decent impact strength, then PETG is an acceptable alternative. It’s pretty cheap, easy to print, but is a little more flexible than ABS. It has decent creep resistance as well, unlike PLA.


Yeah, it’s absolutely not at the level of beginner- and user-friendliness you’d expect from a professional CAD package yet, so a rough experience is understandable. I think we’re all hoping FreeCAD eventually sees the same kind of usability overhaul that Blender and KiCAD got. Both were originally much worse in this respect, but with enough effort (and investment from community members and from major players like CERN, in KiCAD’s case) they ended up being genuinely competitive packages.
Their GPU situation is weird. The gaming GPUs are good value, but I can’t imagine Intel makes much money on them, given the relatively low volume and the relatively large die size compared to competitors (the B580’s die is nearly the size of a 4070’s despite competing with the 4060). Plus they don’t have a major foothold in the professional or compute markets.
I do hope they keep pushing in this area, though, since some serious competition for NVIDIA would be great.


GrapheneOS patches this behavior for apps matching their Google Play signature, IIRC. It’s a behavior apps on the Play Store can opt into (basically they refuse to run if they weren’t installed via Play).
It was rather annoying until recently, since some apps require a certified Android install just to show up in the Play Store, yet don’t actually check Play Integrity in the app itself. Those apps, when installed via Aurora, wouldn’t work for me until GrapheneOS patched this.


Yeah 1.0 has been quite stable for me. I especially recommend the weekly releases with features planned for 1.1, like better sketch projection tools and snapping.


I wouldn’t necessarily say it’s dogshit, as I’ve been enjoying the beta releases. What I will say, though, is that the workflow feels different enough from every other commercial CAD program I’ve tried (SolidWorks, Fusion, Inventor) that it required me to effectively re-learn the software rather than jump right in. Pretty much no other CAD program had this problem, in part because they’re more forgiving when you violate best practices.
FreeCAD is much more rigid in comparison. If you follow its best practices it works wonderfully, but coming from another CAD program, my prior habits kept running me into issues.


I strongly recommend printing out of a sealed dry box as well. There are lots of good designs based around cereal containers and molecular sieves. For extremely hygroscopic filaments like PET-CF, this is the only way I’ve been able to get good prints.


Yes, but at this point most specialized hardware only really works for inference. Most players are training on NVIDIA GPUs, with the primary exception of Google, which has its own TPUs; even those have limitations compared to GPUs (certain kinds of memory accesses are intractably slow, making them a poor fit for methods like Instant-NGP).
GPUs are already quite good, especially with things like tensor cores.
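For context on the memory-access point: hash-grid methods like Instant-NGP boil down to huge numbers of data-dependent random gathers, which GPUs handle cheaply. A toy of the access pattern (the sizes here are made up):

```python
# Toy of the data-dependent gather at the heart of hash-grid methods:
# each sample point looks up a pseudo-random row of a large feature table.
# GPUs do this kind of scattered gather cheaply; hardware built around
# large contiguous matrix ops copes far worse with it.
import torch

table = torch.randn(2**20, 4)                    # big learnable feature table
idx = torch.randint(0, 2**20, (1_000_000,))      # hashed sample indices
feats = table[idx]                               # one random gather per sample
```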


Yeah, you can certainly get it to reproduce some pieces (or fragments) of works exactly, but definitely not everything. Even a frontier LLM’s weights are far too small to fully memorize most of its training data.
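Back-of-envelope version of that capacity argument (every number here is a round illustrative assumption, not any real model’s figures):

```python
# Rough capacity comparison: weights vs. training data. All numbers are
# made-up round figures for illustration.
params = 70e9            # hypothetical parameter count
bytes_per_param = 2      # bf16
weights_bytes = params * bytes_per_param            # ~140 GB

train_tokens = 15e12     # hypothetical training-set size in tokens
bytes_per_token = 4      # very roughly 4 bytes of text per token
data_bytes = train_tokens * bytes_per_token         # ~60 TB

print(f"weights: {weights_bytes / 1e9:.0f} GB, data: {data_bytes / 1e12:.0f} TB")
print(f"ratio: ~{data_bytes / weights_bytes:.0f}x more data than weights")
```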


Most “50 MP” cameras are actually quad-Bayer sensors (effectively lower true resolution) and are usually binned 2×2 down to roughly 12 MP.
The lens on your phone likely isn’t sharp enough to capture 50 MP of detail on a small sensor anyway, so the megapixel number ends up being more of a gimmick than anything.
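The binning itself is trivial; a quick sketch with made-up sensor dimensions shows where the ~12 MP number comes from:

```python
# 2x2 pixel binning: average each block of four photosites into one output
# pixel. The resolution here is illustrative, not any specific sensor's.
import numpy as np

raw = np.random.rand(8192, 6144)                 # ~50 MP of photosites
h, w = raw.shape
binned = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
print(raw.size / 1e6, "MP ->", binned.size / 1e6, "MP")   # ~50 -> ~12.6
```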


I agree with your thoughts. I hate what Bambu has done to the industry in terms of starting a patent arms race and encouraging other companies to turn away from open source, but I do love how they’ve pushed innovation and made 3D printing easier for people just looking for a tool.
I hope the DIY printers like Voron, Ratrig, VzBot, and E3NG can continue the spirit of the RepRap movement.


I work in an area adjacent to autonomous vehicles, and the primary reason has to do with data availability and the stability of the terrain. In the woods you’re naturally going to have worse coverage of typical behaviors just because the space of possible observations is much wider (“anomalies” are more common). The terrain being less maintained also makes planning and perception much more critical. So in some sense, cities are ideal.
Some companies are specifically targeting off-road AVs, but as you can guess, the primary use cases are going to be military.


I do research in 3D computer vision, and in general, depth from cameras (even multi-view) tends to be much noisier than LiDAR. LiDAR gives you explicit depth, whereas with multi-view cameras you have to compute it, which comes with a fair number of failure modes. I think that’s what the above user is getting at when they said Waymo actually has depth sensing.
This isn’t to say Tesla’s approach can’t work at all, just that Waymo’s is more grounded. There are reasons to avoid LiDAR (primarily cost; a good LiDAR sensor is very expensive), but if you can fit LiDAR into your stack, it’ll likely help with reliability.
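To illustrate the “you have to compute it” part, here’s a classical stereo-depth sketch with OpenCV. Wherever matching fails (textureless walls, reflections), the disparity is invalid and you simply get no depth there; the file names and calibration numbers are placeholders:

```python
# Classical multi-view depth: semi-global block matching with OpenCV.
# Invalid disparities (<= 0) mark pixels where matching failed, which is
# one of the failure modes cameras have that LiDAR sidesteps.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                               blockSize=5)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

fx, baseline = 700.0, 0.12        # placeholder focal length (px), baseline (m)
valid = disp > 0                  # matching failed wherever disp <= 0
depth = np.zeros_like(disp)
depth[valid] = fx * baseline / disp[valid]
```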