Does your motherboard have auto-oc enabled? Have you checked what voltage it's using? Have you tried setting a negative voltage offset and stability testing? Some motherboards will apply 1.3v+ when 1.2v is plenty.
The 5800X didn’t ship with a stock cooler IIRC. Mine is cooled with a 360 AIO + PTM7950, the thing just runs really hot when all cores are hitting ~4.4GHz.
The 5700X has the same 8 cores as a 5800X3D but with a slightly higher maximum clock speed (the X3D CPUs tend to have lower maximum voltages because the extra cache die doesn't tolerate voltages as high as the CPU cores do). The only reason the 5700X is running cooler for you is because it comes with a 65W "TDP" setting out of the box rather than the 105W "TDP" setting used by the 5800X3D. If you configure a 5800X3D to operate at the same power limit, it'll give you generally better performance than a 5700X.
In general, buying a power-limited desktop CPU has never been a good strategy to get better efficiency. You can always configure the full-power chip to only use that extra headroom for short bursts, and to throttle down to what you consider acceptable for sustained workloads.
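To make the burst-vs-sustained idea concrete: Linux exposes exactly that split for Intel chips through the RAPL powercap sysfs (a long-term and a short-term package limit). AMD's PPT/TDC/EDC limits normally live in firmware or Ryzen Master instead, so take this purely as a rough sketch of the concept rather than how you'd actually tune a 5800X3D:

    from pathlib import Path

    # Sketch only: Intel's RAPL powercap interface on Linux, used here just to
    # illustrate the sustained-vs-burst split. constraint_0 is the "long_term"
    # limit, constraint_1 the "short_term" one. Needs root; paths vary by system.
    RAPL = Path("/sys/class/powercap/intel-rapl:0")

    def set_limits(sustained_w: float, burst_w: float) -> None:
        """Cap sustained package power while still allowing short boosts."""
        (RAPL / "constraint_0_power_limit_uw").write_text(str(int(sustained_w * 1e6)))
        (RAPL / "constraint_1_power_limit_uw").write_text(str(int(burst_w * 1e6)))

    # e.g. behave like a 65 W part under sustained load, but still boost to 105 W briefly
    set_limits(sustained_w=65, burst_w=105)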
CEC is a shared single-wire bus, a lot like i2c in spirit. HDMI's DDC pins actually are i2c; you can hook regular i2c devices up to an HDMI port and communicate with them. You'll need a pull-up resistor and shouldn't draw more than 50 mA.
I always assumed that it was a separate i2c bus per HDMI link and that it was the AVR’s job to handle a request from something and send the right requests to everything else.
Much like i2c, any message put on the bus is transmitted to everything on the bus.
Version 1.0 and later of the HDMI spec even mandate that you connect the CEC line across all HDMI ports on your device, even if you don't do anything with it.
Okay, now I’m curious. If the pins are just connected across all ports, how does the AVR tell which CEC-speaking device is on which port? Chip select or similar pins?
Answering my own question: CEC is electrically unrelated to DDC/EDID. The EDID data tells each source its physical address, and then the devices negotiate over CEC to choose logical addresses and announce their physical addresses. This is one way to design a network, but it’s not what I would have done.
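For anyone who wants to poke at this: the physical address a sink hands out sits in the EDID's CEA-861 extension, inside the HDMI Vendor-Specific Data Block (the one with IEEE OUI 00-0C-03). A rough sketch of pulling it out of a raw EDID dump (offsets per CEA-861/HDMI 1.x; error handling omitted), e.g. fed from /sys/class/drm/*/edid on Linux:

    def cec_physical_address(edid: bytes):
        """Return the CEC physical address (e.g. '1.2.0.0') advertised in the
        HDMI Vendor-Specific Data Block, or None if there isn't one."""
        # Walk the 128-byte extension blocks that follow the base EDID block.
        for off in range(128, len(edid), 128):
            ext = edid[off:off + 128]
            if len(ext) < 128 or ext[0] != 0x02:      # 0x02 = CEA-861 extension tag
                continue
            dtd_start = ext[2]                        # data blocks end where the DTDs begin
            i = 4
            while i < dtd_start:
                tag, length = ext[i] >> 5, ext[i] & 0x1F
                payload = ext[i + 1:i + 1 + length]
                # Tag 3 = Vendor-Specific Data Block; OUI 00-0C-03 marks HDMI 1.x.
                if tag == 3 and len(payload) >= 5 and payload[:3] == bytes([0x03, 0x0C, 0x00]):
                    ab, cd = payload[3], payload[4]
                    return f"{ab >> 4}.{ab & 0x0F}.{cd >> 4}.{cd & 0x0F}"
                i += 1 + length
        return None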
I wonder if a malfunction in this process is responsible for my AVR sometimes auto-switching to the wrong source.
Depending on how you build it, you could run Home Assistant next to your SMB shares, which lends itself to all sorts of add-ons such as calibre-web for displaying eBooks and synchronizing progress.
Of course, Gitea and its ecosystem, or a similar CI/CD setup, can be a fun thing to dabble with if you aren't totally over that from work.
Another fun idea is to run the rapidly developing immich as a photo storage solution. But in general, the best inspiration is the awesome-selfhosted list.
Running a home server seems relatively popular for all kinds of things. Search term "homelab" brings up a culture of people who seem largely IT-adjacent, prefer retired DC equipment, experiment with network configurations as a means of professional development and insist on running everything in VMs. Search term "self-hosted", on the other hand, seems to skew towards an enterprise of saturating a Raspberry Pi's CPU with half-hearted and unmaintained Python clones of popular SaaS products. In my experience — with both hardware and software vendoring — there is a bounty of reasonable options somewhere in between the two.
Admittedly, this is more of a project for fun than for the end result. You could achieve all of the above by paying for services or doing something else.
I'm running Truenas Scale on my old i7 3770 with 16GB DDR3.
Obviously I've got a bunch of datasets just for storage, one for Time Machine backups over the network, and then dedicated ones for apps.
I'm using it for almost all my self-hosted apps:
Home Assistant, Plex, Calibre, Immich, Paperless NGX, Code Server, Pi-Hole, Syncthing and a few others.
I've got Tailscale on it, and I'm using a convenience package called caddy-reverse-proxy-cloudflare to make my apps available on subdomains of my personal domain (which is on Cloudflare) by just adding labels to the docker containers.
And since I'm putting the Tailscale address as the DNS entry on Cloudflare, they can only be accessed by my devices when they're connected to Tailscale.
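If anyone wants to script that part, the Cloudflare v4 API makes it a one-call job. A rough sketch (token, zone ID, and hostname are placeholders) that points a new app subdomain at the box's tailnet address; the proxied flag has to stay false, since Cloudflare can't reach a 100.64.0.0/10 address anyway:

    import requests

    CF_TOKEN = "..."                  # API token with DNS edit rights (placeholder)
    ZONE_ID = "..."                   # zone ID of the personal domain (placeholder)
    TAILSCALE_IP = "100.64.0.10"      # the server's tailnet address (example)

    def add_app_record(hostname: str) -> None:
        """Create an A record like immich.example.com -> the Tailscale IP."""
        resp = requests.post(
            f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
            headers={"Authorization": f"Bearer {CF_TOKEN}"},
            json={
                "type": "A",
                "name": hostname,
                "content": TAILSCALE_IP,
                "ttl": 300,
                "proxied": False,     # a tailnet IP isn't reachable from Cloudflare's proxy
            },
            timeout=10,
        )
        resp.raise_for_status()

    add_app_record("immich.example.com")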
I think at this point what's amazing is the ease with which I can deploy new apps if I need something or want to try something.
I can have Claude whip up a docker compose and deploy it with Dockge.
Unfortunate that Hacker News doesn't have reply notifications, but I'm curious what you did when retiring it.
Just recycle the parts? Was it your main and only server?
I have that server running TrueNAS, another PC I built for friends and family that only runs Plex, and a third one running an Ethereum validator, which is the most powerful but only does that.
It's not stuff that would sell for any price I'd care to get, and just throwing it away / recycling it feels bad since it still works.
There's a range. A lot of people treat their NAS as their home server - torrents, downloads, media server, even containers and everything that goes with it.
I played with it as well - it's fun and rewarding and can be well optimized, but also... it can be a lot of work and hassle.
For myself, when I say turnkey solution, I should specify that I'm also doing more of a "right device for a specific purpose" approach, so my NAS is now a storage device and nothing else.
I personally don't get what they are serving with a home NAS. Movies/music/family photos are all I can think of... and those don't seem that compelling to me compared to the cloud.
Any substantial movie/series collection can be well over a TB, and thus not cost-efficient to host in the cloud.
I've been running a server with multiple TB of storage for many years and have been using an old PC in a full tower case for the purpose. I keep thinking about replacing the hardware, but it just never seems worth the money spent although it'd reduce the power usage.
I have it sharing data mainly via SSHFS and NFS (a bit of SMB for the wife's windows laptop and phone). I run NextCloud and a few *arr services (for downloading Linux ISOs) in docker.
> and those don't seem that compelling to me compared to cloud
I tend to be cloud-antagonistic because I value control more than ease.
Some of that is practical due to living on the Gulf coast where local infra can disappear for a week+ at a time.
Past that, I find that cloud environments have earned some mistrust because internal integrity is at risk from external pressures (shareholders, governments, other bad actors). Safeguarding from that means local storage.
To be fair to my perspective, much of my day job is restoring functionality lost to the endless stream of anti-user decisions by corps (and sometimes govs).
Also ebooks and software installers, but those and movies/music are my main categories.
Cloud costs would be... exorbitant. 19 TB and I'm nowhere near done ripping my movies. Dropbox would be $96/month, Backblaze $114/month, and OneDrive won't let me buy that much capacity.
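Back-of-the-envelope, with assumed prices (B2 at roughly $6/TB/month, a 20 TB drive at roughly $300), a pair of local disks pays for itself within about half a year:

    # Rough cost comparison with assumed prices; adjust to taste.
    library_tb = 19
    b2_per_tb_month = 6.0                    # assumed Backblaze B2 list price
    cloud_monthly = library_tb * b2_per_tb_month
    local_cost = 2 * 300                     # assumed: two 20 TB drives (data + backup)
    print(f"cloud: ${cloud_monthly:.0f}/month")                                  # ~$114/month
    print(f"local pays for itself in ~{local_cost / cloud_monthly:.1f} months")  # ~5.3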
Another use case is hobby photography: video storage (e.g. drone footage) or keeping a big pile of RAW photos. The cloud stuff becomes impractical quickly.
How does that work for you? Last I tried, any interruption during a remote Time Machine backup corrupted the entire encrypted archive, losing all backup history.
Pop!_OS's COSMIC DE has this baked in. I was unsure about the feature at first, but it has proved itself useful. I wonder if this will eventually be Sherlocked into macOS.