Building a TrueNAS file server in 2025 on the "cheap"

A multimeter was actually used in the building of this server

It all started with a file server that died of extremely old age... and an auction for a bulk lot of hard disks.

Why build a Network Attached Storage (NAS) device (also known as a local file server... or a really big hard disk living in your home or workplace) when everyone is inexorably heading to "the cloud" (storage accessible globally over the internet)... willingly or otherwise? Because it can save you a lot of money, give you a higher degree of IT security, and offer a better experience... why else do you think Synology, Drobo, Dell, and a whole host of other companies sell products that effectively duplicate cloud storage?

Anyone will tell you that cloud storage brings a lot of convenience... but what they don't tell you (if they've even looked into it) is that once you're looking at big storage (say 30TB or more), things get very, very expensive... and the price climbs almost exponentially as the size goes up... if subscriptions with suitable storage quantities are even offered at all.

The other thing they don't tell you is how much performance you're giving up....

In fact, if you only need to access files in the comfort of your own home or office... you can build a local NAS that not only beats cloud services on price over time (sometimes a short period of time), it utterly destroys the performance of any cloud service you care to name, saves you a fortune in subscription fees, and doesn't require a blazing fast internet connection on top of those insane storage subscription fees. That alone can pay off a NAS... either an "off the shelf" model... or a DIY job like I'm building here.

A little research for comparison (feel free to check for yourself)

Apple's iCloud service (at the time of writing) maxes out at 12TB, and charges $89.99 Australian per month for the privilege (or $1,079.88 per year). I bought eight used 10TB drives for $952 (delivered), and spent another $846 on a whole new PC build to drive them... including a 10Gb SFP+ network interface and an LSI 9300 Mini-SAS card. So I break even at the 20 month mark.

Oh, and did I mention I have over 50TB of storage (over 4x the room), and my file transfer speed is, at worst, 157x faster than the average home internet upload speed here in Australia (because upload is almost always a small fraction of the advertised download bandwidth)?

And to match a 10Gb link speed with your internet connection in Australia, you'd need a business plan. The cheapest (and only) provider that got back to me said I'd need to pay over $574.85 per month for the privilege of a 10Gb link... and that would only happen if I lived in Sydney/Melbourne/Brisbane (which I don't), and if I personally supplied all the infrastructure to bridge the gaps (think at least a few more thousand). Yeah... Australia is miles behind other countries... so what else is new?
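For the sceptics, the back-of-envelope maths on those figures:

$952 (drives) + $846 (PC build) = $1,798 total outlay
$1,798 ÷ $89.99 per month ≈ 20 months to match the iCloud 12TB subscription

...and that's before you factor in the business-grade internet connection you'd need to get anywhere near the same transfer speeds.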

A quick warning... DIY is not for everyone. However, great rewards can be had if you try.

Ok, so I'm an extremely well-seasoned veteran in all this RAID/NAS/Big storage stuff... and honestly... it did not go as smoothly as I'd hoped.

Nonetheless, I figured that like-minded people in need of a large amount of space for a reasonable price might find this helpful. So, if you're considering and/or willing to buy second hand gear, and plan on integrating it into some older hardware you have laying about... there are some additional considerations that may complicate things.

Never fear, because here's the "one stop shop" for all the obstacles I came across and how I solved them: with basic problem solving, and where that failed, with the carefully collected and tested wisdom of the kind souls who share such precious information on the web.

These issues are NOT specific to any particular NAS operating system... but they do come with the territory of used drives, and they highlight the key differences in building an array with SAS (Serial Attached SCSI) drives instead of the more ubiquitous SATA drives.

Large capacity SATA drives work with pretty much every computer you may have, and because everyone uses them, there's more competition in the used auction markets, leading to much higher prices. That said... there are some absolute bargains to be had with used SAS gear due to the lack of competition... even if you need to work harder to get it all running.

SAS drives are built for heavy use, offer superior build quality, and are likely sold by companies that have techs that know what they're doing. That said.. it's still possible to buy a lemon... and many have years of continuous use before they're sold on the second hand market. No auction house I know of ever discloses this information, so it's a bit of a gamble.

Even heavily used SAS drives are great for lighter duties. For example, a second-hand array is perfect when you already have a reliable NAS for your day-to-day work. In that scenario, it's fine to use the second-hand array as a space to periodically back that primary NAS up... say... once a week, with the second-hand NAS powered off the rest of the time.

If you're building a used, daily-driver array... it's certainly possible. However, you'd better configure some extra redundancy (definitely not RAID 5 or Z1)... you're probably better off with at least RAID 6 or Z2... or more. It can take a long time (sometimes days) to add a replacement multi-terabyte drive into the array, and that's a long time to be in an "at risk" state.
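Purely to illustrate what Z2 means in plain ZFS terms (on TrueNAS you'd normally build the pool through the web interface, so treat this as a sketch only... the pool name "tank" and the drive identifiers are placeholders):

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

A RAIDZ2 vdev like that can lose any two of its drives before data is lost... which is exactly the buffer you want while a days-long rebuild grinds away.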

So assess your usage style wisely, and build the system accordingly.

Keep reading if you're generally interested in the options "out there" or are in a situation that sounds a lot like this:

  1. You're building a TrueNAS or equivalent system (Unraid, XigmaNas, Rockstor, EasyNAS... amongst others).
  2. You've bought a cheap "Host bus adaptor" (also known as an "HBA") to connect multiple drives to your system... such as a 12Gb/s SAS/SATA capable disk controller like the venerable LSI 9300 series (or its cheaper 6Gb/s siblings, the LSI 9200 series). LSI was later bought by Avago (which itself took on the Broadcom name), and these cards have been sold under numerous re-badged brand names, both before and after the takeover... Avago, Broadcom, Dell, HP, etc.
  3. You've bought the requisite breakout cable(s)... for example, SFF-8643 to SFF-8482, for connecting four SAS drives to each SFF-8643 port of the 9300-8i (check which SFF-numbered mini-SAS port your particular card uses... there's a variety and they're not interchangeable). Of course, you also have to match the other end(s) of that cable to your hard disk type (SAS/SATA or whatever device it needs to connect to).
  4. You've got a PC with enough room, power, cooling, network bandwidth, and computational "oomph" to drive a NAS with <insert your intended number and type of drives here> reliably.
  5. The requisite drives... my example is a bunch of second-hand 10TB Seagate 12Gb SAS drives (model number: ST10000NM0096)... but you can use any size. SATA is much more ubiquitous and easier... especially when bought new... but I'll discuss the technical differences SAS drives involve, below.
  6. The ability to follow instructions to install TrueNAS or similar NAS OS.
  7. The willingness to accept that the commands you used for previous SATA arrays will not be applicable to any SAS setup you might consider.

Are your SAS drives not detected?

This was the first issue I had.

Now, I'm going to assume that you've checked that everything is correctly connected at the drive, at the HBA SAS socket, that your HBA card is firmly seated in the motherboard... and all the cables and drives are actually plugged into your power supply... that's the obvious stuff...

My story at this point...

I installed all the drives in the case, connected the breakout cables to the drives and HBA/"SAS controller" card (whichever terms you prefer), plugged the SATA power cords into the back of my SAS breakout cables (at the hard drive ends) and turned the system on.

I had already installed TrueNAS.. but when I fired up the web interface... none of my drives were detected. In fact, not one of them was even spinning up....

So I pulled out my trusty Power Supply Unit (PSU) tester, and multimeter... thinking there's some sort of power shortage, or disconnection.

Nope, EVERYTHING is getting power and well within voltage range... on EVERY pin of every power plug of every SATA cable connection (which plugs into a SAS adaptor included with the HBA SAS breakout cables). Even without a data cable connection, most drives spin up... hmmmm.

Did I need to update/flash the HBA firmware? (probably should) Was the card even functional? (Might've gotten a dud).. perhaps the breakout cables were damaged or faulty... or even ALL the drives might have a problem... (seems unlikely..but possible).

So I did a bit of digging....

Turns out... certain combinations of SAS drives and older computer PSUs have a different opinion about what to do with the (completely optional... some might even say unnecessary) 3.3V rail. Newer drive specifications repurpose that 3.3V pin as a "power disable" signal, so a drive that honours it will sit there, powered down, for as long as an older PSU keeps feeding 3.3V to that pin... despite all the power being provided faithfully....

Image showing problematic wiring
The key point is that the typical SATA power cord from a PSU can cause issues with some SAS drives, and there are steps you can take to fix that. It's not necessarily that a drive is bad, or that the data connection is broken. So check whether the drives are actually spinning before mangling your connections. A spinning drive will vibrate slightly and warm up. Give it 10 minutes of "on time", touch the drive (not on any circuitry/sockets... obviously) and you'll know whether it's still at ambient temperature or not. Not spinning up? Well then, let's look at one potential cause.

Option 1: Tack a 4 wire SATA power extension cord into the mix!

If your PSU came with 5 wire SATA power cords... and you don't want to cut/maim/destroy the connection of that 3.3V wire that tells your SAS drives not to power up (the power-disable behaviour mentioned above)....

...Then you can "adapt out" that pesky 3.3V wire by using a 4 wire/pin extension cord between the PSU's SATA power cord and the back of the SAS power plug on each drive. The extension cords/splitters look like the images below.

Use a 4 wire SATA extension cable to make SAS drives start up.
The advantage of this setup is that you're merely adding a little cable on the end of the power cords you already have in place. The extension cables are cheap, and it's a very safe procedure to install.

Option 2: Use your PSU's Molex cable + SATA adaptors instead!

A long time ago, all IDE hard disks and optical drives (CD/DVD drives) were powered with Molex connectors. They've fallen out of favour a bit.. now that floppy and optical drives are all but gone... and even hard disks use the SATA connector... if they're not NVMe/M.2 devices. However, most PSUs still come with these Molex cables for additional support and configurability.

Why use an old standard?

Because it never carried that problematic 3.3V line. It supplies the required 12V and 5V lines, so you're not going to have an issue with SAS drives refusing to spin up. Ditch that SATA cord from your PSU and grab the venerable Molex...

...You will still need to buy (or scrounge) enough Molex to SATA adaptors, or better yet splitters, to connect all of your drives... but they're pretty cheap! No destruction, everything as it should be, no warranty woes or electrical knowledge required.

Use the molex PSU cable with Molex to SATA adaptors
While this method requires using your PSU's Molex cable (which you probably already have), the Molex cable by default CANNOT carry that 3.3V wire... it just doesn't have it. As such, you can buy ANY Molex to SATA adaptors and stick them between the PSU's Molex power cord and each drive. Again, no hacking, no damage, just a little extra wiring. It also helps if you're running short of plugs on your PSU's SATA power cord. :-)

Option 3: Instead of modding each drive connection, unify the mod!
I don't know of anyone else who has done things this way (though I'm sure I'm not the first). However, this is how I chose to solve my SAS drive power issue. Instead of modding each and every drive end, I simply pulled the 3.3V pin out of all three SATA power cable plugs where they insert into the PSU. No adaptors needed... although some basic continuity-checking skills with a multimeter to identify the correct pin to remove, and some knowledge of how pins are inserted into (and thus removed from) the plug, are helpful :~)

Problem 2: Drives reporting with zero (or otherwise erroneous) capacity

Now, I'm not talking about the fact that drives are advertised and sold at decimal capacity (a "10TB" drive is 10,000,000,000,000 bytes, roughly 9% smaller than the equivalent true binary capacity of 10TiB, which is 10,995,116,277,760 bytes)... I'm talking about differences in the reported drive capacity that make no sense whatsoever. Like 30% or even 100% less space... perhaps even more space.
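To put numbers on the normal case: one TiB is 1,099,511,627,776 bytes, so a "10TB" drive (10,000,000,000,000 bytes) works out to 10,000,000,000,000 ÷ 1,099,511,627,776 ≈ 9.09 TiB... which is exactly why a perfectly healthy 10TB drive shows up as roughly 9.1TiB in the tools later in this article. That's fine. Zero bytes is not.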

Once I got the drives to spin up by sorting my 3.3V power problem, I went back to the TrueNAS web interface, saw the drives being detected, and after a brief period of elation...

Then I saw that the drives said they all had zero bytes capacity....

Oh no, not another issue.... back to work...

TrueNAS drive list showing that the drives have ZERO bytes capacity
Server farms and big businesses use a variety of hardware... not all of it is home-PC compatible as it arrives from the used market. Block/sector size is a key variable when calculating a drive's total size, and if it's wrong... you get odd results like these 10TB drives pretending they have zero storage. It's not uncommon to see used drives formatted with server-oriented block/sector sizes like 520 or 528 bytes, which the server farm systems they came from expect (instead of the usual 512 that most home systems use).

Your particular drive model may have specific requirements, so you may need to look up the spec sheet for your model. The specification we're after may be called a number of things... something like "block size" or "sector size"; my 10TB drives had it labelled as "512e Sector Size". Whatever the term used, the most common default is 512, so if you can't find it, try 512 if the drive isn't already set that way.

Changing the sector size is easily solvable with a specific type of reformatting... but please note that it will likely take hours, if not days (particularly with larger magnetic drives). If the formatting process is interrupted for any reason... you restart from scratch. So keep pets, kids, or anyone accident-prone away from the system during this time.

Ready to begin? Let's start by checking what block/sector size each of your drives is actually configured to use...

How to save yourself both time, and frustration...

If you're seeing something like the screenshot above, where all your drives are reporting a size they definitely shouldn't be... I'm going to be absolutely clear here...

The web interface is a hindrance in this case. Stick a monitor on the box, and plug in a keyboard.

Once boot-up is finished (or if the machine is already running), you should be presented with a menu of nine or so items, like the image below....

Initial menu on TrueNAS's local console
I find selecting "Open Linux Shell" (press the 7 key, then Enter) the most convenient, and most powerful, place to complete the following steps. You'll be brought to the command line. If the last character on the screen is a dollar sign ($), then you're an everyday user... and you need to be an admin/root user in order to erase large tracts of storage space (as it should be). I'll assume you know the admin password you configured when installing TrueNAS (or your other NAS OS)... you'll need it for the following steps.
authenticating as super user
As described in the image, you need to log in as a user with the authority to erase hard disks.

Type the following command (without quotes) "sudo zsh" and hit enter, then type the admin/root password that you configured during the OS installation.

If that doesn't work, your system may use a different shell environment (like BASH) so try the following command:

"sudo bash" (without quotes of course),

... and then enter the admin password and hit enter a second time. Entering the password confuses new Linux/Unix users... because, for security reasons, it will NOT show up on your screen as you type... no, your keyboard is not broken, and the system has not crashed. Hit Ctrl+C to cancel if you're concerned about it!
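For the nervous, here's roughly what that exchange looks like (the username, hostname, and prompt styling below are just illustrative... yours will differ; the point is the change from $ to #):

admin@truenas[~]$ sudo zsh
[sudo] password for admin:
truenas#

Once you see the hash, you're in business.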

Finding all the hard disks, and making sure that you erase the correct ones.

If you logged in as administrator/root (different names for the same thing) successfully, then your command prompt now ends with a hash (#). That means you can do anything you want to the system... so you'd better know what you're doing. If you ever leave the system to grab a drink or something, type "exit" (without quotes) followed by the enter key to return to normal user status...

NOTE: just don't do that when formatting drives as described below... as that will log you out and stop the formatting process!

But first...

List the hard drives that the system knows about... (usually the ones that are correctly connected and running)

Now, everyone has their preferred way of doing this, but I like to start with the "lsblk" command (that's LSBLK in lower case), and as usual, hit enter. I like lsblk because it works with both SATA and SAS drives..

It should provide something like this...

Image showing the output of lsblk command. Helps to identify drives before we erase the wrong ones.

There's a lot of useful information here, so let's break it down into the good drives, the bad drives, and the drives that you should really leave alone. :-)

If your boot drive and data drives are all the same type (all NVMe, or all SATA/SAS), things can get pretty confusing... everything gets listed together, in the order it's connected. You'll be reliant on the disk size to identify which drives are for your storage array and which ones are running your operating system (they're often separate, and hopefully different, sizes).

The left column is the drive identifier. SATA and SAS drives will start with sd (originally short for "SCSI disk"... Linux handles SATA and SAS drives with the same driver), and then the system assigns a letter to each individual drive: a, b, c... The ordering isn't always what you'd expect, so be careful here.

If any NVMe drives are connected, they use a different interface... they'll get labelled as nvmeXnYpZ (where X is the NVMe controller/socket number, Y is the disk number, and Z is the number of an individual partition on that disk). There are usually at least 3 partitions, each of differing sizes: the tiny first partition is usually the boot partition (it gets enough running to start the others), the next, slightly bigger partition is usually the swap partition (which helps the system run efficiently), and the rest of the drive is usually taken up by the third partition, which stores your OS. ALL THREE PARTITIONS ARE CRUCIAL... don't mess with these.

If you see a listing like mine, chances are the NVMe is the system drive (which it is for me) so you REALLY don't want to format the drive/partitions marked in purple. Also, the green drives are reporting their size correctly.. so they probably don't need a reformat. You can choose to do so... if there's nothing important on them... or you just got them and want to make sure you're "starting fresh"... Well, as fresh as you can be... if you're installing used drives.

The drives sda-sdh are my intended NAS data drives. The green ones, labelled as 9.1TiB (the true binary size of a 10TB drive), are good and don't need formatting. However, if you've got eight 10TB drives... and four are labelled as ZERO bytes... or something way off from the actual size of your drives... they will need reformatting.
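A quick aside: if the default output feels cluttered, or you want to be doubly certain which physical drive is which before erasing anything, lsblk can show just the identifying columns (this is optional... the plain command above is enough):

lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL

Matching the MODEL and SERIAL columns against the labels printed on the physical drives removes any doubt about which sdX you're about to format.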

Don't worry, we'll be confirming both what drive is where... and the underlying issue to see if formatting will actually help in the next command. :~)

Introducing the "smartctl" command..

Many hard disks have an in-built health monitoring system called S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology). If you want to know exactly what's going on with a running drive, go straight to the horse's mouth and use the smartctl command.

Details are everything and so I use the -a flag for "all information" then specify the drive I want to know about. In my case, let's say I want to see that problematic sda drive. The command would be:

smartctl -a /dev/sda

If I wanted to inspect sdb, then I'd type the command with:

smartctl -a /dev/sdb

(yep, I just changed the last letter to suit the corresponding drive.. can you guess the command for the rest of the drives? I hope so!)

The NVMe drives are a little different: you need to leave the partition number off that long-winded NVMe identifier. To do that, just stop before the final "p".

So, in my case, I'd specify the drive itself by typing:

smartctl -a /dev/nvme0n1

Compare the drive names to the lsblk output above, and of course, adapt it to the output of your own system... as yours will almost certainly be different.

Let's look at the output

The four most important parts are:

  1. Is this the right drive? (Use the user capacity, and the serial number to verify that).
  2. If your drive isn't "the right size" (nothing like the user capacity), what is the logical block size? Most of the time this should be 512, but refer to your drive manufacturer's specification sheet for verification (look it up online using the product number shown here... or the model number on the drive's label).
  3. What is the state of the drive?
    1. Is "SMART Health Status" Ok (or not)
    2. What's the temperature? (you probably don't want it over 50 degrees)
    3. How much use (Accumulated power on time) has the drive had?
  4. Any notable errors in the report... particularly uncorrected ones.
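If you don't fancy scrolling through the full report for every drive, you can filter for roughly those fields with grep. The exact field names vary between drive models and firmware, so treat the pattern below as a starting point rather than gospel:

smartctl -a /dev/sda | grep -iE 'capacity|block size|serial|health|temperature|power on'

Run it once per drive (sdb, sdc, and so on) and compare the results side by side.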

Have a look at my output below....

This smartctl output shows an incorrect block size, so this drive needs reformatting.
This shows the output of smartctl. It indicates an erroneous block size of 520 instead of 512... which is why TrueNAS thinks this drive has a zero byte capacity. Reformatting the drive with the correct block size will hopefully fix the issue. Remember that you'll have to inspect each drive with the same command and the appropriate drive letter/ID. You can get them using "lsblk" as above, or from the TrueNAS disks page in the web interface (also shown above). If your TrueNAS system is wired up to the network, the web interface is likely running... access it as you would normally. :-)

Formatting a drive to fix your disks..

Before we begin....

  • Formatting will destroy all data on your drives... so make sure you have an accurate list, including drive "names".. you don't want to erase the wrong ones.
  • Formatting is a slow process on rotational/magnetic drives, often taking hours if not days for block/sector resizing.
  • Formatting to repair the block sizes requires you to know the appropriate size of each block/sector for your particular model of drive. However, if in doubt, try 512.
  • The TrueNAS web interface logs out after a set time, which stops the format and means you have to start again. I much prefer the console with a keyboard and monitor attached for this reason. It also allows you to...
  • Format multiple drives at once (saving hours or even days of delays). You can either end each command with a space and an ampersand (" &" without quotes) in one console... this jumbles up all the progress percentages, since each drive's progress is printed every minute or so without the matching drive name... but it works, even if you don't know which drive is at which stage (there's a sketch of this further below). The better method is issuing each drive's formatting command in its own console, without the ampersand ending.

How to format a drive to the correct block/sector size:

Still with the super user (hashed/#) command prompt, the correct way to format a SAS drive is with the following command:

sg_format -v --format --size=512 /dev/sda

Let's break this command down so you know what's going on...

  • sg_format (a tool that formats SCSI/SAS drives)
  • -v for verbose output (since we like percentage progress reports, and detailed error messages)
  • --format --size=512 (telling the tool to reformat the drive and change the block/sector size to 512)
  • /dev/sda (specifying which drive to format)
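If you'd rather use the single-console, ampersand-ended approach mentioned earlier (instead of the one-drive-per-terminal method described next), a minimal sketch looks like this... adjust the drive letters and block size to suit your own system:

sg_format -v --format --size=512 /dev/sda &
sg_format -v --format --size=512 /dev/sdb &
sg_format -v --format --size=512 /dev/sdc &
jobs

The trailing ampersand pushes each format into the background so you can start the next one immediately, and "jobs" lists what's still running. As warned above, the progress percentages from all the drives get interleaved on one screen... it works, but it isn't pretty.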

Formatting multiple drives at once, one formatting drive per terminal...

  1. Run the formatting command for your particular drive (make sure you get the drive identifier right)... it will look much like the command above... and hit enter.
  2. It gives you a grace period of about 5-10 seconds in which you can cancel with the Ctrl + C combination before the format starts; leave it alone and it will start automatically.
  3. You're currently in the default console (designated TTY1 or something similar), which you reach by pressing Alt + F1. (Just the two keys, Alt and F1; the plus isn't needed, it's just the conventional way of writing keyboard shortcuts.) Most keyboards have 12 function keys, so you can format up to 12 drives individually. Switching between the consoles works like this:
    • TTY1 = Alt + F1
    • TTY2 = Alt + F2
    • TTY3 = Alt + F3
    • TTY4 = Alt + F4... you get the idea
  4. Simply switch between the virtual terminals (the TTYs) and run the format command for a different drive in each. I recommend using TTY1 for sda, TTY2 for sdb... etc., just so it's easier to keep straight in your mind. :-)
  5. Feel free to switch between the terminals to monitor progress (Alt + F3 for drive sdc, for instance). There will be a list of percentages; the highest one is the most recent progress report. My system polled the progress of each drive every minute or so. Yours may be a little different.
  6. Time to wait however long it takes... it might be about 26 hours for my 10TB SAS drives... go, live your life, hug a loved one, drink a tasty beverage, or read a good book... whatever makes you happy. A watched format job does not complete any faster, I assure you.
  7. Once they're all complete, either run "lsblk" and you should see the capacity reporting correctly... or you can do the same with the disks page of the web interface.

If your system is like mine, you should now have all the drives detected and ready for service. From here, set up your TrueNAS (or equivalent) according to the official instructions. I won't rehash them here.

Final report...

So the formatting fixed the issue, I created the pool and shares, added users, and defined the access to them. It's working very well!

I taped up those loose cables, and made a few mods to the case to accept the rather-large heatsink I placed on the CPU.

I then synced the data from the other servers (both primary and secondary) and wow was it a lot faster than doing it over the cloud.

Ok, this would be much faster if I used eight solid state drives, and faster again if I used all NVMe drives.. but they aren't up to 10TB yet... and considering how this array is a multimedia "scratch space", that would quickly burn through the finite SSD rewrite limits... which makes them less than ideal for frequently changing data.

How's the speed?

I was getting a minimum transfer rate upwards of 580MiB/s, and my average was somewhere around 780MiB/s (that's bytes, not bits, per second). It still took about 11 hours to sync a complete backup of over 32 terabytes of "hot data" the first time, and I got that down to about 7 hours with compression the second time. That said, each run involved a lot of little files rather than large tracts of contiguous 4K video footage... so there are some overhead losses there. Still, getting roughly 70% of the maximum theoretical bandwidth, considering the age and shared workload of the source server... is pretty good.
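For those checking my maths: 10Gb/s is ten gigabits per second, which is 1,250MB/s, or roughly 1,192MiB/s of raw line rate. An average of 780MiB/s against that ceiling works out to around 65-70% once protocol overhead and the small-file penalty are accounted for, and even the 580MiB/s worst case is still roughly half of what the link can theoretically carry.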

Intel might not be loved elsewhere, but it's an absolute bargain for me!

The Intel 13th-gen 13600K CPU is wildly overpowered for this purpose, and never came close to over-volting or even raising its temperature much above idle. I honestly have no complaints about using the much maligned chip, because it works for this purpose, it was cheap, and it will probably serve me well for at least a decade with the fixes firmly installed. I'm not gaming on this system, but I may run some Virtual Machines (VMs) should I feel the need. That said, the chip's rated thermals suggested I needed a bigger CPU cooler... but the way I drive it... that seems redundant now.

How does it stack up against an "off the shelf" NAS?

All in all, this is WAY cheaper than a Synology of equal spec, and WAY more powerful than the options at this price point as well. OK, it's not as pretty, and I used an old server case and power supply. Building it from absolute scratch would have been expensive... especially considering how few PC cases can house ten (or even six) 3.5" drives these days.

Without my old case... I'd honestly consider designing my own modular case system from scratch and 3D printing it, or salvaging an old server, to accomplish this seemingly simple but challenging task. (Or I could just build/3D print multiple 8-drive-bay towers, stick them in a separate enclosure, and run the cabling out of a convenient hole in a new PC case.) It's not an insurmountable problem by any means!

How does it compare to cloud services? Here's a lesson from COVID.

If I combine the cost of a business internet connection, and cloud storage of equal spec...

I break even on my NAS build expense here in mere weeks. Not even a month. Everything after that is all profit.

(UPDATE: IT'S BEEN RUNNING NOW FOR 6 MONTHS WITHOUT ISSUE!)

If it blows up in under a year, I'll simply fix it and restore the data from my other NAS systems, if that's even needed. For a household hosting two small businesses, that means we can run extremely lean when needed.

Where's this Covid lesson?

Ren's pro photography/videography/drone operation/virtual tour/mapping services are just too demanding for most cloud services that cater to small businesses. There seems to be a gap at this level, both in price and performance. Providers either cater to private everyday users (like Apple), where the performance is limited, or to mid-sized businesses and beyond, where the price is prohibitive.

Yet, while other photographers closed their doors during COVID... Ren's business survived the multiple shutdowns. Not because she was a better or more famous photographer... but because she brought her ongoing business expenses to near zero and promoted her diverse skill set.

No business internet plan or extra mobile plan, no work cloud subscriptions, no reliance on rented software, equipment, cars, or office spaces. She simply cut all the fat out of her life and cut her mortgage down by 20 years. Meanwhile, Ren took on a wide variety of jobs. She could still do her graphic design work remotely when photography was slow over the shutdowns, and her photography work was diverse enough (real estate virtual tours, product photography, etc.) that it actually took off during the harder times. Those fancy, high-priced wedding photographers, on the other hand, had limited their skill sets and were basically unemployable for other tasks. They had little in their portfolios to attract clients outside of their "wheel houses".

Beware ongoing costs!

I'm not against cloud services, or any useful service for that matter. They're great when work is plentiful and no "unforeseen events" occur. That said, I personally know several photographers who closed their businesses down when times got tough... They simply failed to consider that every subscription means you're "outsourcing" your daily essentials... and those bills don't stop because your work has been "slow", and the providers have no sympathy for hardship, even if you've been a loyal customer for years.

So far too many photographers were paying these fees hand over fist, without any income... most were slow to realise that the situation wasn't going to be resolved quickly... and then discovered that cancellation fees can be even more painful.

Now ask yourself.... "How do they operate once they've cancelled the service?" Few small businesses had good answers..

So what if Ren and I need to send large data to a cloud?

If we must send data to the cloud, we can do so to the recipient's cloud account... it's just slow. The only reasons to do this are to pass post-processing on to subcontractors as a job requires... or to present the finished (and often compressed) products to our clients.

A second hand server is no substitute for a cloud service!

If the cloud aspect is critical to collaborative efforts over distance... you already know that. I'm not saying cloud accounts aren't useful. But for sole traders, or private citizens with higher storage needs, more limited budgets, and little need to collaborate... it's a poor fit.

Would I run a Fortune 500 company on this second hand NAS? Of course not. However, it'll serve as a backup for a business of any size... and a small business might well get away with a system like this for years! Especially if there is redundancy, and other servers covering "unforeseen events".

I can say, as an IT guy of some... ahem... decades now, that companies running a "temporary" rig for years on end are the norm, not the exception... however much I may cringe at it when it's clearly not enough for the situation.

Over a decade of professional cost-cutting in university IT support and systems engineering has taught me plenty. I hope it inspires you to save your money wherever you're able, whatever your IT skill level might be.

All the best in your storage endeavours.

Ham.