

Winterboard, that’s it!
Ten years later, they finally replicated my iPhone 5 jailbreak theme and widgets! Well, partially.
What was that Cydia theming app called… it was titled in leetspeak, I think?
Sorry, I meant the KA1 or KA3, got them mixed up. My KA3 was like $50 used.
I use it on my PC, too.
Considering the cost relative to the hardware, that I can use it basically forever, and that it’s a lower-distortion DAC than any phone? It’s not bad. And it’s a barely noticeable add-on for my headphones that just lives on the cord.
Eh, even if they got it right and more popular, it would have enshittified quick.
TBH, getting a nice dongle like a Fiio KA5 is not so bad. It’s small enough to just hang off the cord, it sounds better anyway, and you don’t have to throw it away every time you switch phones.
Fast refresh rates are amazing. I cherished my old Razer Phone 2.
My last iPhone was an iPhone 5. Or 6, maybe?
Fast forward, and I’ve been on Android until right now, when I got an iPhone 16 in a loss-leader sale.
…And I am astounded by how much worse it is. My old jailbroken iPhone’s UI was both simpler and 100x more customizable and useful than all these bizarre required gestures; I spent days trying to teach my Mom and grandpa how to use it, to no avail. At the same time, it’s as uncustomizable as ever.
I had basically every feature the 16 has now, like the action button, and more. And it somehow feels slower in browsing than my SD845 Android 9 phone.
It wasn’t perfect back then, but the App Store is flooded with garbage now.
I literally want my iPhone 5 back. WTF has Apple been doing?
I feel like the standard should be two phones. A disposable ‘banking’ phone: tiny, no camera, no speakers, small SoC, just the absolute bare minimum to live.
…And then a ‘media’ phone without all the enshittification.
The iOS App Store needs the ability to report fraud, which it doesn’t offer until you install an app.
That’s probably to reduce brigading? Android and iOS are infested with all sorts of fraudulent marketing techniques like fake reviews, and mass fraud-reporting of competitors sounds like another.
Honestly I don’t think many people would care? Until the security holes became intractable, I guess.
It’s proven that Android phones are doing awful stuff, even client side, and has that slowed them down?
This time around all it would allow was “disable”.
This has been par for other OEM-flavored Android phones for years, unfortunately.
“Disable” is alright, not that the phone itself isn’t a privacy nightmare in other ways.
This could really suck for us because customers without a good advertising ‘paper trail’ (like many on Lemmy, I imagine) could get slapped with high default pricing.
…Otherwise (if they default to low pricing), people would try to game it, and they’re probably aware of that.
Eh, there’s not as much attention paid to them working across hardware because AMD prices their hardware uncompetitively (hence devs don’t test them much), and AMD themselves focus on the MI300X and above.
Also, I’m not sure what layer one needs to get ROCm working.
Even the small local AI niche hates ChatGPT, heh.
Clickbait.
There may be thought in a sense.
An analogy might be a static biological “brain” custom-grown to predict a list of possible next words in a block of text. It’s thinking, sorta. Maybe it could acknowledge itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.
And if one wants to go down the rabbit hole of “well there are different types of sentience, lines blur,” yada yada, with the end point of that being to treat things like they are…
All ML models are static tools.
For now.
It depends!
Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD. But it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip. Or get lucky with Docker.
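For reference, here’s a minimal sketch of what the vLLM Python route looks like once it’s actually installed (the model name is just an example, and on AMD you’d want their ROCm build or Docker image):

```python
# Minimal vLLM sketch (assumes a working install; the model name is only an example).
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", dtype="float16")
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a MoE model is in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

The API itself is simple; the hard part is getting the install and versions right, which is where the pip wrestling comes in.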
Base llama.cpp is fine, as are forks like kobold.cpp ROCm. That route is more doable, without so much hassle.
The AMD Framework desktop is a pretty good machine for large MoE models. The 7900 XTX is the next best hardware, but unfortunately AMD is not really interested in competing with Nvidia in terms of high-VRAM offerings :'/. They don’t want money, I guess.
And there are… quirks, depending on the model.
I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.
NPUs are mostly a nothingburger so far, only good for tiny models.
Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.
A lot of people do offload MoE models to Threadripper or EPYC CPUs, via ik_llama.cpp, transformers or some Chinese frameworks. That’s the homelab way to run big models like Qwen 235B or deepseek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090 and put more of the money in the CPU platform.
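As a rough sketch of the transformers route (the checkpoint name and memory split below are placeholder assumptions, not a recommendation; ik_llama.cpp does the same idea via CLI flags):

```python
# Hedged sketch: CPU+GPU offload of a big MoE with Hugging Face transformers.
# The checkpoint and the memory numbers are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B"  # example large MoE checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",                         # whatever doesn't fit on the GPU spills to CPU RAM
    max_memory={0: "22GiB", "cpu": "400GiB"},  # e.g. one 3090/4090 plus a big EPYC's RAM
)
inputs = tok("Write a haiku about VRAM.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```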
You won’t find a good comparison because it literally changes by the minute. AMD updates ROCm? Better! Oh, but something broke in llama.cpp! Now it’s fixed and optimized four days later! Oh, an architecture change, now it doesn’t work again. And look, exl3 support!
You can literally bench it in a day and have the results be obsolete the next, pretty often.
Depends. You’re in luck, as someone made a DWQ (which is the most optimal way to run it on Macs, and should work in LM Studio): https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit-DWQ/tree/main
It’s chonky though. The weights alone are like 40GB, so assume 50GB of VRAM allocation for some context. I’m not sure what Macs that equates to… 96GB? Can the 64GB one allocate enough?
Otherwise, the requirement is basically a 5090. You can stuff it into 32GB as an exl3.
Note that it is going to be slow on Macs, being a dense 72B model.
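If you go the Mac route outside LM Studio, a minimal mlx-lm sketch would look something like this (assuming `pip install mlx-lm`; LM Studio’s MLX backend does roughly the same thing under the hood):

```python
# Hedged sketch of running the DWQ with mlx-lm on Apple Silicon.
# 72B params at ~4.5 bits/weight is roughly 40GB of weights, plus context on top.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Kimi-Dev-72B-4bit-DWQ")
print(generate(model, tokenizer, prompt="def quicksort(arr):", max_tokens=128))
```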
One last thing: I’ve heard mixed things about 235B, so there might be a smaller, better-suited LLM for whatever you do.
For instance, Kimi 72B is quite a good coding model: https://huggingface.co/moonshotai/Kimi-Dev-72B
It might fit in vLLM (as an AWQ) with 2x 4090s, and it would easily fit in TabbyAPI as an exl3: https://huggingface.co/ArtusDev/moonshotai_Kimi-Dev-72B-EXL3/tree/4.25bpw_H6
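As a rough idea of the 2x 4090 vLLM route (the AWQ repo name here is a placeholder, I’m not pointing at a specific upload):

```python
# Hedged sketch: a 72B AWQ quant split across two 4090s with vLLM.
# The repo name is hypothetical; point it at whatever AWQ quant you find or make.
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-org/Kimi-Dev-72B-AWQ",  # placeholder repo
    quantization="awq",
    tensor_parallel_size=2,             # split weights across both GPUs
    max_model_len=8192,                 # keep context modest so the KV cache fits in 48GB
)
out = llm.generate(["Refactor this function: ..."], SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)
```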
As another example, I personally use Nvidia Nemotron models for STEM stuff (other than coding). They rock at that, specifically, and are weaker elsewhere.
The irony is that Android felt way more intuitive, including to my non-techy family, even in the worst case (e.g. Samsung devices with their spammy UI).