Boot-pool degraded, SMART looks fine?

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
Hi Everybody,
Earlier this week my boot-pool suddenly went degraded. Running zpool status -v showed the pool had both read and write errors, and one of the drives didn't show up. It still showed under 'Manage Disks', so I tried running SMART tests, but those came back failed.
I pulled the SSD from the server, hooked it up to my PC with one of those SATA-to-USB adapters, and looked up the SMART values in Speccy. It showed all green/good.
I plugged it back into the server, cleared it with zpool clear, let it resilver, and ran a short SMART test. All looked good.

Then the pool went degraded again. So I pulled it again, hooked it up to my PC again, and used DiskCheckup to run a short and a long SMART test.
Both came back with no issues. I shrugged, and remembered reading somewhere that a bad SATA cable can sometimes cause issues.
So I plugged the drive back into the server with a different cable on the same port. Cleared the zpool errors, let it resilver, ran a short SMART test. Again, all good.
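(For reference, the clear/resilver/retest cycle described above boils down to roughly these commands, run from a shell on the server; /dev/sdj is assumed to be the suspect SSD:)
Code:
zpool status -v boot-pool     # shows the FAULTED member and its read/write error counts
zpool clear boot-pool         # clears the error counters; the mirror resilvers on its own
smartctl -t short /dev/sdj    # kick off a short SMART self-test
smartctl -a /dev/sdj          # a few minutes later, check the test result and attributes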

Now today it's gone degraded again. @joeschmuck's Multi-Report script email currently shows the following, giving both SMART and zpool information (I cut all the SMART info except for the 'failed' drive):
sdj is supposed to be the bad drive. Yes, there's now a SATA_Phy_Error_Count and a CRC_Error_Count, but I suspect that's just from pulling the drive from the server without powering it down, since you can't offline drives in your boot-pool.
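(Since sdX names can shift between boots, it's worth double-checking the suspect drive by serial number before pulling it; a quick, assumed check:)
Code:
smartctl -i /dev/sdj    # "Serial Number:" should read 50026B7381A31481 for the suspect Kingston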

Does anyone have any insight into what to do now? Since SMART looks okay, I can't really go back to the retailer (it's a pretty new drive) and claim it's faulty.

Thanks for your wisdom.
Code:
Multi-Report v2.0.10 dtd:2023-03-06 (TrueNAS Scale 22.12.1)
Report Run 11-Mar-2023 @ 16:28:28.53

*ZPool/ZFS Status Report Summary
Pool Name     Status     Pool Size     Free Space     Used Space     Frag     Read Errors     Write Errors     Cksum Errors     Scrub Repaired Bytes     Scrub Errors     Last Scrub Age     Last Scrub Duration
TrueNAS     ONLINE     102T     46.0T     55.6T (54%)     2%     0     0     0     0     0     30     15:50:43
boot-pool     DEGRADED     107.73G     105G     2.73G (2%)     2%     0     22     0     ---     ---     Resilvered     ---
nvme-pool     ONLINE     899.70G     874G     25.7G (2%)     0%     0     0     0     0     0     7     00:00:25

*Data obtained from zpool and zfs commands.

Spinning Rust Summary Report
Device ID     Serial Number     Model Number     HDD Capacity     RPM     SMART Status     Curr Temp     Temp Min     Temp Max     Power On Time     Start Stop Count     Load Cycle Count     Spin Retry Count     Re-alloc Sects     Re-alloc Evnt     Curr Pend Sects     Offl Unc Sects     UDMA CRC Error     Read Error Rate     Seek Error Rate     Multi Zone Error     He Level     Last Test Age     Last Test Type
/dev/sda     7130A0CVFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     30*C     24*C     35*C     8769     50     56     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdb     7130A0D6FVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     30*C     24*C     35*C     8770     50     56     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdc     7130A09EFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     30*C     23*C     34*C     8769     50     55     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdd     7130A0CRFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     28*C     23*C     33*C     8759     30     34     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sde     42U0A0MYF94G     TOSHIBA MG07ACA14TE     14.0TB     7200     PASSED     31*C     25*C     35*C     1241     7     7     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdf     7130A0BEFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     30*C     23*C     34*C     8770     50     54     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdg     7130A0CBFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     31*C     23*C     35*C     8769     50     55     0     0(3)     0(1)     0     0     0     0     0     ---     ---     0     Short
/dev/sdi     7130A0BWFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     31*C     23*C     36*C     8769     50     56     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdk     7130A05WFVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     30*C     22*C     34*C     8771     52     56     0     0     0     0     0     0     0     0     ---     ---     0     Short
/dev/sdl     7130A037FVJG     TOSHIBA MG08ACA14TE     14.0TB     7200     PASSED     28*C     22*C     32*C     8771     52     57     0     0(3)     0(2)     0     0     0     0     0     ---     ---     0     Short
/dev/sdm     42U0A0BCF94G     TOSHIBA MG07ACA14TE     14.0TB     7200     PASSED     32*C     23*C     35*C     4415     8     8     0     0     0     0     0     0     0     0     ---     ---     0     Short



SSD Summary Report
Device ID     Serial Number     Model Number     HDD Capacity     SMART Status     Curr Temp     Temp Min     Temp Max     Power On Time     Wear Level     Re-alloc Sects     Re-alloc Evnt     Curr Pend Sects     Offl Unc Sects     UDMA CRC Error     Last Test Age     Last Test Type
/dev/sdh     50026B7381A308DD     KINGSTON SA400S37120G     120GB     PASSED     26*C     19*C     37*C     329     99     ---     0     ---     ---     0     0     Short
/dev/sdj     50026B7381A31481     KINGSTON SA400S37120G     120GB     PASSED     28*C     21*C     38*C     311     99     ---     0     ---     ---     0     0     Short



NVMe Summary Report
Device ID     Serial Number     Model Number     HDD Capacity     SMART Status     Critical Warning     Curr Temp     Power On Time     Wear Level
/dev/nvme0n1     2235E65C03BB     CT1000P3PSSD8     1.00TB     PASSED     GOOD     31*C     327     100

Multi-Report Text Section

External Configuration File in use dtd:2023-01-22

Statistical Export Log Located:
/mnt/TrueNAS/scripts/statisticalsmartdata.csv
Emailed every: Mon

CRITICAL LOG FILE
boot-pool - Scrub Online Error


END

WARNING LOG FILE
boot-pool - Scrub Write Errors


END

########## ZPool status report for TrueNAS ##########
  pool: TrueNAS
 state: ONLINE
  scan: scrub repaired 0B in 15:50:43 with 0 errors on Thu Feb  9 12:25:26 2023
config:

    NAME                                      STATE     READ WRITE CKSUM
    TrueNAS                                   ONLINE       0     0     0
      raidz3-0                                ONLINE       0     0     0
        0a38cf8b-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        09737924-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        d9fd11cf-9761-11ed-abfe-3cecef8c44fa  ONLINE       0     0     0
        0a014274-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0b3f1b68-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0ae27c9a-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        56f3655d-2ed5-11ed-ab7a-3cecef8c44fa  ONLINE       0     0     0
        0a775747-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0dcc0444-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0a474b61-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0cf68994-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0

errors: No known data errors

Drives for this pool are listed below:
0a38cf8b-a583-11ec-9714-3cecef8c44fa -> sdl2
09737924-a583-11ec-9714-3cecef8c44fa -> sdb2
d9fd11cf-9761-11ed-abfe-3cecef8c44fa -> sde2
0a014274-a583-11ec-9714-3cecef8c44fa -> sdf2
0b3f1b68-a583-11ec-9714-3cecef8c44fa -> sdi2
0ae27c9a-a583-11ec-9714-3cecef8c44fa -> sdd2
56f3655d-2ed5-11ed-ab7a-3cecef8c44fa -> sdm2
0a775747-a583-11ec-9714-3cecef8c44fa -> sdk2
0dcc0444-a583-11ec-9714-3cecef8c44fa -> sdc2
0a474b61-a583-11ec-9714-3cecef8c44fa -> sda2
0cf68994-a583-11ec-9714-3cecef8c44fa -> sdg2


########## ZPool status report for boot-pool ##########
  pool: boot-pool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: resilvered 28.2M in 00:00:00 with 0 errors on Thu Mar  9 00:55:06 2023
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool   DEGRADED     0     0     0
      mirror-0  DEGRADED     0     0     0
        sdh3    ONLINE       0     0     0
        sdj3    FAULTED      0    22     0  too many errors

errors: No known data errors


########## ZPool status report for nvme-pool ##########
  pool: nvme-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:25 with 0 errors on Sun Mar  5 00:00:26 2023
config:

    NAME                                    STATE     READ WRITE CKSUM
    nvme-pool                               ONLINE       0     0     0
      9516f97e-fb7c-4c3a-9b54-3d949db386db  ONLINE       0     0     0

errors: No known data errors

Drives for this pool are listed below:
9516f97e-fb7c-4c3a-9b54-3d949db386db -> nvme0n1p2


########## SMART status report for sdj drive (Phison Driven SSDs : 50026B7381A31481) ##########

SMART overall-health self-assessment test result: PASSED

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   100   100   000    Old_age   Always       -       100
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       311
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       11
148 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
149 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
167 Write_Protect_Mode      0x0000   100   100   000    Old_age   Offline      -       0
168 SATA_Phy_Error_Count    0x0012   100   100   000    Old_age   Always       -       1
169 Bad_Block_Rate          0x0000   100   100   000    Old_age   Offline      -       0
170 Bad_Blk_Ct_Lat/Erl      0x0000   100   100   010    Old_age   Offline      -       0/0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
173 MaxAvgErase_Ct          0x0000   100   100   000    Old_age   Offline      -       0
181 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count        0x0000   100   100   000    Old_age   Offline      -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
192 Unsafe_Shutdown_Count   0x0012   100   100   000    Old_age   Always       -       8
194 Temperature_Celsius     0x0022   029   038   000    Old_age   Always       -       29 (Min/Max 21/38)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
199 SATA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0
218 CRC_Error_Count         0x0032   100   100   000    Old_age   Always       -       1
231 SSD_Life_Left           0x0000   099   099   000    Old_age   Offline      -       99
233 Flash_Writes_GiB        0x0032   100   100   000    Old_age   Always       -       284
241 Lifetime_Writes_GiB     0x0032   100   100   000    Old_age   Always       -       310
242 Lifetime_Reads_GiB      0x0032   100   100   000    Old_age   Always       -       36
244 Average_Erase_Count     0x0000   100   100   000    Old_age   Offline      -       10
245 Max_Erase_Count         0x0000   100   100   000    Old_age   Offline      -       21
246 Total_Erase_Count       0x0000   100   100   000    Old_age   Offline      -       10399

Warning: ATA error count 0 inconsistent with error log pointer 1

ATA Error Count: 0
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error -4 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  00 00 00 00 00 00 00

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d0 01 00 4f c2 00 08      00:00:00.000  SMART READ DATA
  b0 d1 01 01 4f c2 00 08      00:00:00.000  SMART READ ATTRIBUTE THRESHOLDS [OBS-4]
  b0 da 00 00 4f c2 00 08      00:00:00.000  SMART RETURN STATUS
  b0 d5 01 00 4f c2 00 08      00:00:00.000  SMART READ LOG
  b0 d5 01 01 4f c2 00 08      00:00:00.000  SMART READ LOG

Num Test_Description  (Most recent Short & Extended Tests - Listed by test number)
# 1 Short offline Completed without error 00% 298 -
# 5 Extended offline Completed without error 00% 246 -


SCT Error Recovery Control:  SCT Commands not supported


End of data section


Attachment: multi_report_config.txt
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Did you ever get this fixed?
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
Hi Joe. Fixed? Not so much, but resolved, maybe. As SMART tests are still coming back looking fine, I cleared the pool error.
It has been running without issue since.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well that is good and hopefully it will never come back. It would be nice to have a smoking gun.
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
So I've been trying a bunch of stuff, but my boot-pool keeps getting degraded, and not in the fun, consensual kind of way.
I have a bunch of SMART tests/reports that show no issues, but ZFS keeps logging read and write errors.
It errors, I pull the drive, test it on a different PC (fine), put it back in TrueNAS, test it there (fine), reset the pool errors, it runs fine for a couple of days, and then boom, ZFS errors again.

I've done short and extended SMART tests on a different machine.
I've tried different SATA cables.
I've tried different SATA power cables.
I've since reinstalled the OS.

I don't know what else to do. Does anyone have any suggestions? They'd be more than welcome.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Did you try different SATA ports on the motherboard?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Same drive having the errors?

Did you try different SATA ports on the motherboard?
Agreed.

I would lean towards a firmware update: https://media.kingston.com/support/downloads/SA400_SQ500_SBFKP1C5_RN.pdf
Your current firmware is S3E00100. Take a look, make sure I'm correct about the drives you are running.
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
Thanks for the suggestions. I had cleared the errors and it has been stable for now. It'll probably shit itself again, so I'll try a different port then.

It has been the same drive the last couple of times, but I've also messed with its mirror drive, which I know has also shat itself at one point or another.
I've also had both faulted at the same time, and the NAS became unreachable.

I'll update back if/when I have more info.
I'll also look into the firmware thing you mentioned, Joe.
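(For what it's worth, the installed firmware revision can also be read straight from the NAS without pulling the drive; smartctl prints it on the "Firmware Version:" line:)
Code:
smartctl -i /dev/sdj    # look for "Firmware Version: S3E00100" (or newer)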
 
Joined
Jun 15, 2022
Messages
674
It's looking like it might be a cable issue, though for reference S.M.A.R.T. over USB is a bit sketchy.

Instead run the following on the server:
smartctl --xall /dev/sdj | less

You can also pipe SMART data for the whole lot of drives out to one file (change the outfile location to something that makes sense instead of /mnt/jumpdrv/log/, maybe /SMARTlogs/ or whatever):

Code:
#!/bin/bash
# Record S.M.A.R.T. logs for all /dev/sd? drives into one timestamped file.

outfile="/mnt/jumpdrv/log/$(date +%F_%H%M%S)_smart.txt"

echo "Logging S.M.A.R.T. information for drives:"
for drive in /dev/sd?
do
    echo "  $drive"

    echo "======================================================================" >> "$outfile"
    echo "START LOG FOR: $drive" >> "$outfile"
    echo "command: smartctl --xall $drive" >> "$outfile"
    echo "======================================================================" >> "$outfile"

    smartctl --xall "$drive" >> "$outfile"

    echo "----------------------------------------------------------------------" >> "$outfile"
    echo "END LOG FOR: $drive" >> "$outfile"
    echo "----------------------------------------------------------------------" >> "$outfile"

done
echo "Done."
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
Hooked the SSD straight up to my PC as suggested. Took a bit to figure out why Kingston SSD Manager didn't 'see' the drive even though its events page listed it (the SATA controller in the BIOS was set to RAID instead of AHCI).
So now the manager sees the drive, but it says there's no firmware update available. Ugh. Hopping into support chat tomorrow.
The manager's SMART data all still looks fine, though.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
In the NAS, or via USB adapter?
When it was failing. If it was on a USB adapter then that really could explain the failure as some USB adapters are just not great.
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
When it was failing. If it was on a USB adapter then that really could explain the failure as some USB adapters are just not great.
Oh, no. When it was failing it was in the NAS, connected straight to the motherboard via SATA.
Right now I'm running on one of the two SSDs from the boot-pool.

And I hope yesterday was an April Fools' joke. My storage pool all of a sudden had 3-4 read/write errors on several of the HDDs.
Code:
########## ZPool status report for TrueNAS ##########
  pool: TrueNAS
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 1 days 16:31:53 with 0 errors on Tue Mar 21 08:03:33 2023
config:

    NAME                                      STATE     READ WRITE CKSUM
    TrueNAS                                   ONLINE       0     0     0
      raidz3-0                                ONLINE       3     0     0
        0a38cf8b-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        09737924-a583-11ec-9714-3cecef8c44fa  ONLINE       3     4     0
        d9fd11cf-9761-11ed-abfe-3cecef8c44fa  ONLINE       0     0     0
        0a014274-a583-11ec-9714-3cecef8c44fa  ONLINE       1     4     0
        0b3f1b68-a583-11ec-9714-3cecef8c44fa  ONLINE       1     4     0
        0ae27c9a-a583-11ec-9714-3cecef8c44fa  ONLINE       3     4     0
        56f3655d-2ed5-11ed-ab7a-3cecef8c44fa  ONLINE       0     0     0
        0a775747-a583-11ec-9714-3cecef8c44fa  ONLINE       0     0     0
        0dcc0444-a583-11ec-9714-3cecef8c44fa  ONLINE       3     4     0
        0a474b61-a583-11ec-9714-3cecef8c44fa  ONLINE       3     4     0
        0cf68994-a583-11ec-9714-3cecef8c44fa  ONLINE       3     4     0

errors: No known data errors

Drives for this pool are listed below:
0a38cf8b-a583-11ec-9714-3cecef8c44fa -> sdk2
09737924-a583-11ec-9714-3cecef8c44fa -> sdb2
d9fd11cf-9761-11ed-abfe-3cecef8c44fa -> sde2
0a014274-a583-11ec-9714-3cecef8c44fa -> sdf2
0b3f1b68-a583-11ec-9714-3cecef8c44fa -> sdh2
0ae27c9a-a583-11ec-9714-3cecef8c44fa -> sdd2
56f3655d-2ed5-11ed-ab7a-3cecef8c44fa -> sdm2
0a775747-a583-11ec-9714-3cecef8c44fa -> sdl2
0dcc0444-a583-11ec-9714-3cecef8c44fa -> sdc2
0a474b61-a583-11ec-9714-3cecef8c44fa -> sda2
0cf68994-a583-11ec-9714-3cecef8c44fa -> sdg2
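(When several drives log a handful of read/write errors at the same time like this, the kernel log often shows whether it was a controller or link reset rather than the disks themselves; a rough, assumed check on SCALE:)
Code:
dmesg -T | grep -iE 'ata[0-9]|reset|i/o error' | tail -n 50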
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
And I hope yesterday was an April Fools' joke.
Sorry, that wasn't me. Mine just set the flag to make ALL drives red and told you that "bits were flying off the platters" :smile:

I'd run a scrub and if you have no repaired bytes, run
Code:
zpool clear TrueNAS
to clear the errors, then keep an eye on them.
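(Spelled out, that sequence would look roughly like this:)
Code:
zpool scrub TrueNAS
zpool status -v TrueNAS    # wait for the scrub to finish and check "scrub repaired"
zpool clear TrueNAS        # only if nothing needed repairing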
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
So the Kingston rep says (rep name removed):
06:51:14 PM [N-]
Is your system running for long period of time before it is switch off?
06:51:44 PM [Grimm]
its usually in a nas, so it would generally not get switched off at all
06:54:01 PM [N-]
Ok, so a possible reason for your drive to develop errors is the fact that it is not used as intended. The A400 SSD is designed for desktop and notebook computer workload and is not intended for "server" environment or usage 24/7.
and
06:57:04 PM [N-]
If there is no firmware update available it means that there is no firmware for your drive
06:57:07 PM [N-]
Firmware updates address specific controls of SSDs or improve functions that reportedly do not work as intended. Updates are only available for SSDs that would benefit from an update. This means that not the entire SSD range will receive the update, only certain revisions of that SSD model.
So don't buy cheap Kingston drives for your TrueNAS build.

Would changing the 'HDD Standby' setting do anything? I mean, it's not an HDD. It's currently set to 'Always On'.

I had a year of runtime with a single Lexar NQ100 SSD without any issues. >.<
 
Joined
Jun 15, 2022
Messages
674
In theory, "no, it would not."

The bottom line is that Kingston possibly did not write firmware that's up to the task. This is why members here are always suggesting that people setting up a NAS buy server-grade equipment instead of consumer gear (preferably used, at a relatively inexpensive price, or at least for less than new consumer-grade kit), not because they're snobs but because we really do want your system to work out.

There's an awesome video on firmware/memory issues if you're interested; it's long but explains a lot. Originally posted by @Ericloewe in an excellent post (totally worth reading) entitled "Zebras All the Way Down - Bryan Cantrill, Uptime 2017". If you can watch it from the beginning, it's amazing. Briefly, it's about a BIOS setting that basically causes the system not to report memory errors because... there are so many memory errors they felt it best to hide them.
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
I'd run a scrub and if you have no repaired bytes, run
Code:
zpool clear TrueNAS
to clear the errors, then keep an eye on them.
Scrub repaired 0B in 18:59:56 with 0 errors, so I have no clue what got into the pool to make it act up.
I'll chalk it up to a poor April Fools' joke from the NAS itself.
 

GrimmReaperNL

Explorer
Joined
Jan 24, 2022
Messages
58
So while the boot-pool was degraded, the other drive shat the bed (again). So I just put my older single Lexar SSD in and brought that up to date.
Don't use Kingston A400s for your boot-pool, folks.

Now I'm unsure whether to buy two new, different SSDs, or another one of this Lexar drive.
Neither the Lexar nor the Kingstons have a DRAM cache; should I look for drives with cache?
All these SSDs are so large. TrueNAS needs what, 16GB? 32?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
TrueNAS needs what, 16GB? 32?
8GB, but as you upgrade, having room for a few previous boot environments is good. I would buy a 32GB SSD at a minimum; however, buy a decent-quality drive at the most reasonable price and don't factor in capacity. In other words, if a 128GB drive is less expensive than a 32GB drive, purchase the 128GB drive. Think of it as severe over-provisioning, not wasted space.
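(If you want to see how little a SCALE install actually uses, something like this shows the current boot-pool usage:)
Code:
zpool list boot-pool
zfs list -r -o name,used,avail boot-pool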
 