tylerflick's comments | Hacker News

To the benefit of everyone backing up their media libraries.

> 5600X is cheaper and acts less like a heater over the winter

It’s impossible to keep my 5800X below 90°C under full load.


The Ryzen CPUs seem to be designed to spike as high as whatever thermal limit you configure with Precision Boost Overdrive.

Does your motherboard have auto-OC enabled? Have you checked what voltage it's using? Have you tried setting a negative voltage offset and stability testing? Some motherboards will apply 1.3 V+ when 1.2 V is plenty.
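
If the box runs Linux, here is one rough way to eyeball the reported rail voltages without rebooting into the BIOS. This is only a sketch: which hwmon entry (if any) is actually Vcore depends on your board's Super I/O chip and whether its driver (nct6775, it87, ...) is loaded, so treat the labels as hints.

    # Sketch: list every voltage rail the kernel's hwmon drivers expose.
    # Values in in*_input are millivolts; labels are driver/board dependent.
    from pathlib import Path

    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        chip = (hwmon / "name").read_text().strip()
        for volt in sorted(hwmon.glob("in*_input")):
            label_file = hwmon / volt.name.replace("_input", "_label")
            label = label_file.read_text().strip() if label_file.exists() else volt.name.split("_")[0]
            print(f"{chip}: {label} = {int(volt.read_text()) / 1000:.3f} V")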

Then you need better cooling. The stock coolers are pretty mediocre.

The 5800X didn’t ship with a stock cooler, IIRC. Mine is cooled with a 360mm AIO + PTM7950; the thing just runs really hot when all cores are hitting ~4.4 GHz.

Mine runs at 60°C under full Prime95 load, though I've got an open bench case.

The 7000 series is designed to run that hot; I wonder how your 5000-series chip even gets there.


This is where the 5700X shines. 8 cores, still cool.

The 5700X has the same 8 cores as a 5800X3D but with a slightly higher maximum clock speed (the X3D CPUs tend to have lower maximum voltages because the extra cache die doesn't tolerate voltages as high as the CPU cores do). The only reason the 5700X is running cooler for you is because it comes with a 65W "TDP" setting out of the box rather than the 105W "TDP" setting used by the 5800X3D. If you configure a 5800X3D to operate at the same power limit, it'll give you generally better performance than a 5700X.

In general, buying a power-limited desktop CPU has never been a good strategy to get better efficiency. You can always configure the full-power chip to only use that extra headroom for short bursts, and to throttle down to what you consider acceptable for sustained workloads.


I got the 5950x at launch for stupid money. I have no reason to part from it.

And the 7900! (AM5, though.)

> TOTALLY okay with it consuming +600W energy

The 2019 i9 MacBook Pro has entered the chat.


CEC is just i2c, which is a bus. In fact, you can hook regular i2c devices up to an HDMI port and communicate with them. You’ll need a resistor and shouldn’t draw more than 50 mA.

I always assumed that it was a separate i2c bus per HDMI link and that it was the AVR’s job to handle a request from something and send the right requests to everything else.

Much like i2c, any message put on the bus is transmitted to everything on the bus.

Versions 1.0 and later of the HDMI spec even mandate that those pins be connected across all HDMI ports on your device, even if you don't do anything with them.


Okay, now I’m curious. If the pins are just connected across all ports, how does the AVR tell which CEC-speaking device is on which port? Chip select or similar pins?

Answering my own question: CEC is electrically unrelated to DDC/EDID. The EDID data tells each source its physical address, and then the devices negotiate over CEC to choose logical addresses and announce their physical addresses. This is one way to design a network, but it’s not what I would have done.

I wonder if a malfunction in this process is responsible for my AVR sometimes auto-switching to the wrong source.


Every device has a logical address that is included in messages.
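
To make the addressing concrete, here is a rough sketch in plain Python (not tied to libcec or any real stack) of what goes on the wire: the header byte of every frame packs the sender's and receiver's 4-bit logical addresses, and the physical address learned from EDID just rides along as payload in messages like <Report Physical Address>. The opcode and address values below are from the CEC spec; everything else is illustrative.

    # CEC header: high nibble = initiator logical address, low nibble = destination.
    CEC_BROADCAST = 0xF                  # logical address 15 = broadcast/unregistered
    OPCODE_REPORT_PHYSICAL_ADDR = 0x84   # <Report Physical Address>

    def cec_header(initiator: int, destination: int) -> int:
        return ((initiator & 0xF) << 4) | (destination & 0xF)

    def report_physical_address(initiator: int, phys_addr: int, device_type: int) -> bytes:
        """Build a <Report Physical Address> frame, e.g. phys_addr=0x1000 for 1.0.0.0."""
        return bytes([
            cec_header(initiator, CEC_BROADCAST),
            OPCODE_REPORT_PHYSICAL_ADDR,
            (phys_addr >> 8) & 0xFF,
            phys_addr & 0xFF,
            device_type & 0xFF,
        ])

    # Playback device (logical address 4) on the TV's first HDMI input (1.0.0.0)
    # announcing itself to everyone on the bus:
    print(report_physical_address(0x4, 0x1000, 0x04).hex())  # -> 4f84100004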

Isn't DDC the I2C bus? Interesting article about that here: https://mitxela.com/projects/ddc-oled

Doh, you’re right. I’m over here getting my protocols mixed up. IIRC it is very similar though.

It's electrically similar, but not directly compatible. (if you know better than me, please let me know)
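
For anyone who wants to poke at the DDC side without any extra hardware: on Linux the kernel already does the i2c transaction for you and exposes the raw EDID it read from each connector in sysfs. A rough sketch (paths and the minimal parsing are illustrative; assumes DRM connectors under /sys/class/drm):

    # Sketch: dump which connectors have an EDID and decode the 3-letter
    # manufacturer ID (EDID bytes 8-9: three 5-bit letters, 1 = 'A').
    from pathlib import Path

    for edid_path in Path("/sys/class/drm").glob("card*-*/edid"):
        edid = edid_path.read_bytes()
        if len(edid) < 128:
            continue  # connector exists but nothing is plugged in
        raw = int.from_bytes(edid[8:10], "big")
        mfg = "".join(chr(((raw >> shift) & 0x1F) + ord("A") - 1) for shift in (10, 5, 0))
        print(f"{edid_path.parent.name}: manufacturer {mfg}, {len(edid)} bytes of EDID")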

Yeah, as someone with two of these, I would never let them connect to the internet. They're chock full of ads.

I do connect them to a jailed LAN so I can control them over the network.


Monitor brightness is controlled over DDC/CI, which is just i2c. Windows most certainly supports this at the OS level.
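
For the curious, here is a rough sketch of the Linux side of this (assumes ddcutil is installed and the monitor speaks DDC/CI; on Windows the same VCP feature goes through the Monitor Configuration API instead). VCP code 0x10 is the MCCS brightness/luminance control.

    # Sketch: read and set monitor brightness over DDC/CI by shelling out to ddcutil.
    import subprocess

    BRIGHTNESS_VCP = "10"  # MCCS VCP code 0x10 = luminance

    def get_brightness() -> str:
        out = subprocess.run(["ddcutil", "getvcp", BRIGHTNESS_VCP],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def set_brightness(percent: int) -> None:
        subprocess.run(["ddcutil", "setvcp", BRIGHTNESS_VCP, str(percent)], check=True)

    print(get_brightness())
    set_brightness(40)  # dim to 40%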


If you want a macOS/OS X release with soul, check out Snow Leopard.


Are people doing more than serving SMB shares with their NASes? I feel like I'm missing out on something.


Depending on how you build it, you could run Home Assistant next to your SMB shares, which lends itself to all sorts of add-ons, such as calibre-web for displaying eBooks and synchronizing reading progress.

Of course, Gitea and its surroundings, or similar CI/CD tooling, can be a fun thing to dabble with if you aren't totally over that from work.

Another fun idea is to run the rapidly developing Immich as a photo-storage solution. But in general, the best inspiration is the awesome-selfhosted list.


Running a home server seems relatively popular for all kinds of things. Search term "homelab" brings up a culture of people who seem largely IT-adjacent, prefer retired DC equipment, experiment with network configurations as a means of professional development and insist on running everything in VMs. Search term "self-hosted", on the other hand, seems to skew towards an enterprise of saturating a Raspberry Pi's CPU with half-hearted and unmaintained Python clones of popular SaaS products. In my experience — with both hardware and software vendoring — there is a bounty of reasonable options somewhere in between the two.


I use mine for a ton:

- Home Assistant

- GitHub backups

- Self-hosting personal projects

- File sync

- golink service

- freshrss RSS reader

- Media server

- Alternative frontends for reddit/youtube

- GitHub Actions runners

- Coder instance

- Game servers (Minecraft, Factorio)

Admittedly, this is more of a project for fun than for the end result. You could achieve all of the above by paying for services or doing something else.

https://github.com/shepherdjerred/homelab/tree/main/src/cdk8...


People want all kinds of things besides literal SMB shares:

- Other network protocols (NFS, ftp, sftp, S3)

- Apps that need bulk storage (e.g., Plex, Immich)

- Syncthing node

- SSH support (for some backup tools, for rsync, etc)

- You're already running a tiny Linux box in your home, so maybe also Pihole / VPN server / host your blog?

You've got compute attached to storage, and people find lots of ways to use that. Synology even has an app store.


I'm running TrueNAS SCALE on my old i7-3770 with 16GB of DDR3.

I've obviously got a bunch of datasets just for storage, one for Time Machine backups over the network, and then dedicated ones for apps.

I'm using it for almost all my self-hosted apps.

Home Assistant, Plex, Calibre, Immich, Paperless NGX, Code Server, Pi-Hole, Syncthing and a few others.

I've got Tailscale on it, and I'm using a convenience package called caddy-reverse-proxy-cloudflare to make my apps available on subdomains of my personal domain (which is on Cloudflare) just by adding labels to the Docker containers.

And since I'm putting the Tailscale address as the DNS entry on Cloudflare, they can only be accessed by my devices when they're connected to Tailscale.
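
Roughly, the DNS half of that setup looks like the sketch below: create an unproxied A record on Cloudflare pointing a subdomain at the machine's tailnet address. This is hypothetical glue (the zone ID, token, hostname, and 100.x address are all placeholders), not what the Caddy package does internally.

    # Sketch: point app.example.com at a Tailscale IP via the Cloudflare v4 API.
    import requests

    ZONE_ID = "your-zone-id"
    API_TOKEN = "your-api-token"
    TAILSCALE_IP = "100.64.0.1"  # the NAS's address inside the tailnet

    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "type": "A",
            "name": "app.example.com",  # the subdomain Caddy will serve
            "content": TAILSCALE_IP,
            "proxied": False,  # stays unproxied so only tailnet devices can reach it
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["success"])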

I think at this point what's amazing is the ease with which I can deploy new apps if I need something or want to try something.

I can have Claude whip up a Docker Compose file and deploy it with Dockge.


I just retired my 3770 server last month; it was a good system.


It's unfortunate that Hacker News doesn't have reply notifications, but I'm curious what you did when retiring it.

Just recycle the parts? Was it your main and only server?

I have that server running TrueNAS, another PC I built for friends and family that runs Plex only, and a third one running an Ethereum validator, which is the most powerful but only does that.

It's not stuff that would sell for any price I'd care to get, and just throwing it away / recycling it feels bad since it still works.


I still have one powering a firewall. The only pressure to replace it is power consumption.


There's a range. A lot of people treat their NAS as their home server - torrents, downloads, media server, even containers and everything that goes with it.

I played with it as well - it's fun, rewarding, and can be well optimized, but it can also be a lot of work and hassle.

For myself, when I say turnkey solution, I should specify that I'm also doing more of a "right device for each specific purpose" approach, so my NAS is now a storage device and nothing else.


I personally don't get what people are serving with a home NAS. Movies/music/family photos are all I can think of... and those don't seem that compelling to me compared to the cloud.


Any substantial movie/series collection can be well over a TB and thus not cost-efficient to host in the cloud.

I've been running a server with multiple TB of storage for many years and have been using an old PC in a full tower case for the purpose. I keep thinking about replacing the hardware, but it just never seems worth the money spent although it'd reduce the power usage.

I have it sharing data mainly via SSHFS and NFS (and a bit of SMB for my wife's Windows laptop and phone). I run Nextcloud and a few *arr services (for downloading Linux ISOs) in Docker.

(Currently 45TB in use on my system)

Edit: as no-one is asking, I base my system on mergerfs which was inspired by this excellent site: https://perfectmediaserver.com/02-tech-stack/mergerfs/


> and those don't seem that compelling to me compared to cloud

I tend to be cloud-antagonistic because I value control more than ease.

Some of that is practical due to living on the Gulf coast where local infra can disappear for a week+ at a time.

Past that, I find that cloud environments have earned some mistrust because internal integrity is at risk from external pressures (shareholders, governments, other bad actors). Safeguarding from that means local storage.

To be fair to my perspective, much of my day job is restoring functionality lost to the endless stream of anti-user decisions by corporations (and sometimes governments).


Also ebooks and software installers, but those and movies/music are my main categories.

Cloud costs would be... exorbitant. 19 TB and I'm nowhere near done ripping my movies. Dropbox would be $96/month, Backblaze $114/month, and OneDrive won't let me buy that much capacity.


And you can buy those disks for… $300 or $400 apiece, I guess? A one-time purchase.
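
Back-of-the-envelope with the numbers above: Backblaze B2 runs about $6/TB/month, hence 19 TB ≈ $114/month, while a pair of ~20 TB drives at $300-$400 each is roughly $700 once. The disks pay for themselves in six or seven months, ignoring power and the rest of the box.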


Another use case is hobby photography: video storage (e.g., drone footage) or keeping a big pile of RAW photos. Cloud storage becomes impractical quickly.


I host mine locally to back up the cloud, or in case Google just screws my account one day.


Not much more, but the extra bits probably differ for different people.

My (Synology) NAS also serves as a Time Machine backup and hosts an LDAP backend for my.


How does that work for you? Last I tried, any interruption during a remote Time Machine backup corrupted the entire encrypted archive, losing all backup history.


It's a secondary backup method for my household but seems to have been chugging along for years without issues for me.


I'm hosting a couple of apps in Docker on mine. (Pihole, Jellyfin, Audiobookshelf, and Bitwarden.)


I run NFS and Postgres to enable multiple-machine video editing.


Musk isn’t getting a trillion. Tesla sales would have to skyrocket.


The package doesn't say who the buyers must be. Musk could just have his other pet companies buy Teslas to meet the threshold.


Imagine that they do skyrocket, but by then the RoboCEO is in charge and the trillion gets distributed to shareholders.


Imagine that at least half the shares were held by a sovereign wealth fund that paid dividends to every citizen.


Pop!_OS's COSMIC DE has this baked in. I was unsure about the feature at first, but it has proved itself useful. I wonder if this will eventually be Sherlocked into macOS.

