[mythtv-users] How is this pricing possible?

Alex Butcher mythlist at assursys.co.uk
Tue Mar 1 22:49:56 UTC 2011


On Tue, 1 Mar 2011, John Drescher wrote:

> On Tue, Mar 1, 2011 at 2:45 PM, Bobby Gill <bobbygill at rogers.com> wrote:
>> Thanks for the info! Just checking on my laptop, here is my smartctl
>> results: http://pastebin.com/mBNmv05r
>>
>> I am just confused as to whether I should be checking the results of the
>> column VALUE (4th to the left) or the very last column, RAW_VALUE? I am
>> setting up a RAID array in the coming weeks for the first time so knowing
>> what to pay attention to for HD health is sure timely, thanks for your help.
>>
>
> I look at the last column (although a second user in this thread said
> he ignores that and looks at the thresholds...). To me your results
> say that on 5 occasions your drive had trouble reading some sectors
> and in the process it reallocated 41 total sectors.

To the best of my knowledge, those reallocations will have occurred when
those problematic sectors were next written after the read error.

If current_pending_sector is anything other than 0, it means that read
errors have occurred and the sector(s) are marked for reallocation, but
writes to those sectors haven't happened, so the reallocations haven't
happened yet.
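If you want to check both attributes at a glance, something like this works (the sample lines below are invented for illustration; on a real machine you'd pipe `smartctl -A /dev/sda` -- adjust the device path -- into the same awk filter):

```shell
# Two lines in smartctl -A format with made-up values, standing in for real output:
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       41
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0'

# Column 2 is the attribute name, column 10 the RAW_VALUE:
echo "$sample" | awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector)$/ {
    print $2, "raw =", $10
}'
```

Pending sectors showing 0 alongside a non-zero reallocated count is the healthy case described above: the problem sectors have already been rewritten and remapped.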

> This number is not that much but I would keep an eye on that.

Always worth keeping an eye on the reallocated_sector_ct attribute. Modern
drives (as of the beginning of 2009, at least) have "thousands" of spare
sectors, according to Seagate, so 41 is hardly any, even if the author of
libatasmart disagrees and uses a binary logarithmic function of the raw
count, which means it reports a 3TB disc as failing when only a handful
more sectors have failed than on an 80GB disc.
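To illustrate why that scaling saturates so quickly (this shows only the shape of a binary log, not libatasmart's exact formula):

```shell
# Each doubling of the raw reallocation count moves a log2-based score by
# just one step, regardless of how many spare sectors the disc actually has.
awk 'BEGIN {
    split("41 82 164 328", n, " ")
    for (i = 1; i <= 4; i++)
        printf "%d reallocated sectors -> log2 = %.2f\n", n[i], log(n[i]) / log(2)
}'
```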

>  For the guy that looks at thresholds I believe it means 95% health on
> this item.

Naw, the cooked value is 100, which looks to be the nominal value for a
drive in perfect health. The raw value of 41 means we can suspect that it's
a little tiny bit worse than that, but nothing worth worrying about for now,
unless it changes rapidly, or continues to worsen steadily.
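As a rule of thumb, an attribute is only formally failing once its cooked VALUE falls to or below THRESH. A quick filter for that over `smartctl -A` output (the sample lines, including the deliberately "failing" Raw_Read_Error_Rate, are invented for illustration):

```shell
# Three lines in smartctl -A format with made-up values; the first is
# healthy (100 > 36), the second has THRESH 0 (informational only), and
# the third is contrived to be at/below its threshold (33 <= 51).
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       41
193 Load_Cycle_Count        0x0032   018   018   000    Old_age   Always       -       829207
  1 Raw_Read_Error_Rate     0x002f   033   033   051    Pre-fail  Always       -       12345'

# Column 4 is VALUE, column 6 is THRESH; ignore attributes with THRESH 0.
echo "$sample" | awk '$4 ~ /^[0-9]+$/ && ($4 + 0) <= ($6 + 0) && ($6 + 0) > 0 {
    print "at/below threshold:", $2
}'
```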

Far more alarming is the load_cycle_count attribute, which probably started
at 100 and is now 18 (after 829207 load cycles).  My guess is that the OP
has left the APM settings at default.  Personally, for laptop drives, I
write a little ACPID script which disables hard disc APM whenever the laptop
is on AC power. I've also found it necessary to do something similar on my
new MythTV box which uses 2TB WD Caviar Black drives.
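For the curious, my script looks roughly like this (the sysfs path, device list and battery-side APM level are assumptions you'd tune for your own machine; on mine it's hooked up via an acpid event rule along the lines of `event=ac_adapter` / `action=/etc/acpi/actions/disk-apm.sh`):

```shell
#!/bin/sh
# Sketch of an acpid action script: disable drive APM on mains power,
# restore a moderate level on battery. Paths and levels are assumptions.

# Map AC state (1 = on mains) to an hdparm -B level:
# 255 disables APM entirely; 128 is a middling battery-friendly default.
apm_level() {
    if [ "$1" -eq 1 ]; then echo 255; else echo 128; fi
}

# /sys/class/power_supply/AC/online is a common sysfs location, but the
# name varies by machine -- check your own /sys/class/power_supply/.
on_ac=$(cat /sys/class/power_supply/AC/online 2>/dev/null || echo 0)

if command -v hdparm >/dev/null 2>&1; then
    for disk in /dev/sda; do
        hdparm -B "$(apm_level "$on_ac")" "$disk"
    done
fi
```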

> John

Best Regards,
Alex

