[mythtv-users] best use of limited RAM

Marius Schrecker marius.schrecker at lyse.net
Wed Nov 22 14:30:17 UTC 2017


On Wednesday, November 22, 2017 15:24 CET, Stephen Worthington <stephen_agent at jsw.gen.nz> wrote:
 On Wed, 22 Nov 2017 14:46:31 +0100, you wrote:

>
>On Wednesday, November 22, 2017 14:35 CET, Mike Hodson <mystica at gmail.com> wrote:
> Dear Marius,
>
> What kind of SSD? Sandforce, or other? The model would be most helpful.
> What kind of NAND? Single-level? Multi-level? Triple-level? (Almost
> assuredly not triple-level, given the size/age.) Could you paste a
> smartctl -a /dev/sdX?
>
> What are you attempting to cache? Reads or writes? Reads, while
> potentially disruptive if you read the exact same sector enough times,
> are almost certainly mitigated by the wear-levelling action of the disk
> in question. Writes will never cache 100%, unless you use something
> insane like XFS, which never seems to write data until sync() is called
> if enough RAM is free. Maybe they've fixed that by now, but I know I've
> lost files I saved when the system (after MANY hours) subsequently
> hard-locked due to kernel weirdness with graphics drivers.
>
> I've personally done a rather large amount of SSD usage over the past
> ...when was Sandforce 1 released? Many terabytes of writes per drive,
> and not a single disk has died on me, not even a tiny old 60GB one.
> I've also worked for companies using consumer-grade SSDs (850 Evo among
> others) in enterprise webhosting scenarios. They rarely die, and almost
> always give fair warning with SMART values beforehand.
>
> Also, have you taken a look at The Tech Report's "SSD Endurance
> Experiment"? Its final conclusion is that some disks can write
> PETABYTES before dying.
> http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
>
> For MythTV use, the MySQL database would be the only major workload, as
> long as all video goes to the spinner; it's definitely not anywhere
> near as bad as shoving about 10-25 virtual private servers on each
> disk, doing $entity-knows-what for the course of a few years. In
> RAID5/6, no less. Periodic /var/log/messages and similar background OS
> daily workload is negligible.
>
> As for swap? Why would your system need swap? Disable it entirely and
> let things OOM-kill themselves if something goes awry. Or be like my
> desktop, and set 20GB of swap for a 16GB system (effectively 10GB for
> an 8GB system) to have room to hibernate, and potentially let some
> random process chew up all 10 gigs before OOM-killing itself, bogging
> down the system for _minutes_ before the OOM killer responds properly.
>
> I've only run into massive swap usage in two scenarios:
>
> 1. If I allow processes like Chrome to continue to spawn new tabs like
> rabbits and never close them; sooner or later the swap finally fills,
> and things OOM-kill. But this is _gradual_ swapping.
>
> 2. If I attempt to compile something large with -j16, as Sabayon is set
> up to do by default, if you fail to tweak the out-of-the-box
> make.conf... Ugh.
>
> In almost all cases, just use the SSD as you would a normal disk;
> almost industry-wide, the wear-levelling algorithms were rock-solid as
> of about 4 years ago.
>
> Hope this helps, and I would love to know the SMART data as well, to
> make proper predictions.
>
> Mike
>
> On Wed, Nov 22, 2017 at 7:41 AM, Marius Schrecker
> <marius.schrecker at lyse.net> wrote:
>
> Hi,
>
>  I am repurposing an old box, previously used as a combined mythtv backend/frontend, Logitech Media Server and NFS fileserver, as a mythtv backend only.
>
>The system has a maximum of 8GB RAM, a quite well used 120GB SSD (no signs of failure yet) and I just replaced the 3TB media storage spindle drive with a new 4TB unit.
>
>My main concern is offloading the SSD as much as possible to prolong its life.
>
>I have started with a base install of Xubuntu, set up fstab with tmpfs for /tmp, /var/tmp and /var/spool/mqueue, and installed log2ram to take care of /var/log. Relatime is set by default on all partitions.
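For reference, the fstab entries described here might look roughly like the following (the sizes are illustrative assumptions, not values from the post; note the filesystem type is spelled tmpfs):

```
tmpfs   /tmp               tmpfs  defaults,noatime,size=1G    0  0
tmpfs   /var/tmp           tmpfs  defaults,noatime,size=512M  0  0
tmpfs   /var/spool/mqueue  tmpfs  defaults,noatime,size=128M  0  0
```

One caveat: /var/tmp is normally expected to survive reboots, so software that relies on that may need care.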
>
>The original plan was to increase the RAM to 16GB or even 32GB for maximum disk caching, but I see that my mobo doesn't support that, even though the physical DIMM modules are available.
>
>  Does the group have any suggestions for tweaks to make my 8GB go as far as possible and help minimise disk writes?
>
>I am in two minds about creating a swapfile on the spindle drive, maybe with zram and keeping swappiness relatively high. Would that encourage the system to increase the amount of RAM used for disk caching?
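A zram swap device of the kind mentioned above can be set up roughly as follows (the 2G size, lz4 algorithm and swappiness value are my illustrative assumptions; all commands need root):

```
# Load the zram module and create one compressed RAM block device
modprobe zram num_devices=1

# Size the device (zramctl is part of util-linux)
zramctl /dev/zram0 --algorithm lz4 --size 2G

# Format it as swap and enable it ahead of any disk-backed swap
mkswap /dev/zram0
swapon --priority 100 /dev/zram0

# A higher swappiness makes the kernel more willing to push idle
# anonymous pages out to (compressed) swap, leaving RAM for the page cache
sysctl vm.swappiness=80
```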
>
>BR.
>
>--Marius--
>  Thank you very much, Mike. That was most helpful.
>
>I can't remember which SSD it is right now, but I will take a look and read it out with smartctl this evening and let you know. I did a quick smartctl readout yesterday, which reported no errors.
>
>I was intending to do a firmware update on the disk, before installing the system, but couldn't find a firmware, so just left it as is.
>
>Regarding the swap area, I'm not expecting any swapping to take place in normal usage, but thought that I might be able to force more aggressive write caching (and fewer writes) by using zswap to force silly amounts of available RAM (compared with the intended usage).
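For what it's worth, zswap (a compressed cache in front of an existing swap device, distinct from zram) is normally enabled via kernel parameters or sysfs; a sketch, with illustrative values:

```
# On the kernel command line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub):
#   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

# Or at runtime via sysfs (as root):
echo 1  > /sys/module/zswap/parameters/enabled
echo 20 > /sys/module/zswap/parameters/max_pool_percent
```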
>
>  Anyway, the stats on SSDs look promising.
>
>BR.
>
>--Marius--

I do get swapping with 8 Gibytes of RAM - I normally have parts of
mythfrontend swapped out, and since upgrading to v29 I can see some of
mythbackend swapped as well. Here is a list of my current top swap
users:

Overall swap used: 688132 kB
========================================
kB pid name
========================================
297040 3879 mythfrontend.re
102712 3751 mythbackend
37688 3279 mysqld
32776 2959 Xorg
22280 3694 xfdesktop
14048 3839 smart-notifier
9792 2170 named
9752 1484 snapd
8752 2187 python3
7736 2818 snmpd
6061 3342 (sd-pam)
6000 4110 console-kit-dae
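A listing like the one above can be generated by summing the VmSwap field of /proc/&lt;pid&gt;/status; a minimal sketch (Linux-only; the function names are mine, not from any MythTV tool):

```python
import os
import re

def parse_vmswap(status_text):
    """Return the VmSwap value in kB from /proc/<pid>/status text, or 0."""
    m = re.search(r"^VmSwap:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else 0

def top_swap_users(n=12):
    """Return [(kB, pid, name)] for the n processes using the most swap."""
    users = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/status" % pid) as f:
                text = f.read()
        except OSError:
            continue  # process exited, or we lack permission to read it
        kb = parse_vmswap(text)
        if kb > 0:
            name = re.search(r"^Name:\s*(.+)$", text, re.MULTILINE).group(1)
            users.append((kb, int(pid), name))
    users.sort(reverse=True)
    return users[:n]

if __name__ == "__main__":
    users = top_swap_users()
    # Note: summing per-process VmSwap only approximates total swap in use
    print("Swap used by top %d processes: %d kB" % (len(users), sum(u[0] for u in users)))
    for kb, pid, name in users:
        print("%8d %6d %s" % (kb, pid, name))
```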

I think most of this is caused by my massive list of recordings:

MariaDB [mythconverg]> select count(*) from recorded;
+----------+
| count(*) |
+----------+
|    24770 |
+----------+
1 row in set (0.00 sec)

and the consequent massive use of thumbnails and cache.

It does not seem to cause any particular problems though - my guess is
that the swapped out parts of mythfrontend and mythbackend are bits
that are not normally used on my system and hence they never get
swapped back in again.

I have a Samsung 950 Pro NVMe M.2 256 Gbyte SSD. Here is its
SMART data:

smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.4.0-101-generic] (local
build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke,
www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: Samsung SSD 950 PRO 256GB
Serial Number: S2GLNCAGA02568R
Firmware Version: 1B0QBXX7
PCI Vendor/Subsystem ID: 0x144d
IEEE OUI Identifier: 0x002538
Controller ID: 1
Number of Namespaces: 1
Namespace 1 Size/Capacity: 256,060,514,304 [256 GB]
Namespace 1 Utilization: 184,960,925,696 [184 GB]
Namespace 1 Formatted LBA Size: 512
Local Time is: Thu Nov 23 02:56:35 2017 NZDT
Firmware Updates (0x06): 3 Slots
Optional Admin Commands (0x0007): Security Format Frmw_DL
Optional NVM Commands (0x001f): Comp Wr_Unc DS_Mngmt Wr_Zero
Sav/Sel_Feat
Maximum Data Transfer Size: 32 Pages

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.50W - - 0 0 0 0 5 5
1 + 5.80W - - 1 1 1 1 30 30
2 + 3.60W - - 2 2 2 2 100 100
3 - 0.0700W - - 3 3 3 3 500 5000
4 - 0.0050W - - 4 4 4 4 2000 22000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning: 0x00
Temperature: 52 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 53,360,107 [27.3 TB]
Data Units Written: 39,461,466 [20.2 TB]
Host Read Commands: 912,642,687
Host Write Commands: 536,929,133
Controller Busy Time: 1,714
Power Cycles: 206
Power On Hours: 12,620
Unsafe Shutdowns: 110
Media and Data Integrity Errors: 0
Error Information Log Entries: 2,889

Error Information (NVMe Log 0x01, max 64 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS
0 2889 4 - 0x0000 0x000 0 0 -
1 2888 0 0x0075 0x4212 0x000 0 255 -
2 2887 0 0x0074 0x4212 0x000 0 255 -
3 2886 0 0x006e 0x4212 0x000 0 255 -
4 2885 0 0x006d 0x4212 0x000 0 255 -
5 2884 0 0x0029 0x4212 0x000 0 255 -
6 2883 0 0x0028 0x4212 0x000 0 255 -
7 2882 0 0x0024 0x4212 0x000 0 255 -
8 2881 0 0x0023 0x4212 0x000 0 255 -
9 2880 0 0x001f 0x4212 0x000 0 255 -
10 2879 0 0x001e 0x4212 0x000 0 255 -
11 2878 0 0x0016 0x4212 0x000 0 255 -
12 2877 0 0x0015 0x4212 0x000 0 255 -
13 2876 0 0x0011 0x4212 0x000 0 255 -
14 2875 0 0x0010 0x4212 0x000 0 255 -
15 2874 0 0x000c 0x4212 0x000 0 255 -
... (48 entries not shown)

The warranty on these drives is "5 Year Limited Warranty or 200TBW"
(terabytes written). I have 12,620 hours = 1.44 years of operation
now, and have used 20.2 TBW in that time, so around 1/10th of the
warranted lifetime. So I am in no danger of wearing it out any time
soon. This is in spite of my having a 10 Gibyte swap partition on the
SSD. I did take the precaution of leaving 25 Gibytes of unallocated
space for the drive to use as over-provisioning.
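The arithmetic above can be checked directly, using the figures from the SMART output and the warranty terms:

```python
# Figures from the SMART output above and the drive's warranty terms
tb_written = 20.2          # Data Units Written, in TB
power_on_hours = 12620
warranty_tbw = 200.0       # "5 Year Limited Warranty or 200TBW"

years = power_on_hours / (24 * 365)               # ~1.44 years of operation
used_fraction = tb_written / warranty_tbw         # ~0.10, i.e. 1/10th of warranty
rate_tb_per_year = tb_written / years             # ~14 TB written per year
years_to_limit = warranty_tbw / rate_tb_per_year  # ~14 years at this rate

print("%.2f years on, %.0f%% of warranted writes used; ~%.0f years to reach %g TBW"
      % (years, 100 * used_fraction, years_to_limit, warranty_tbw))
```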

When I assessed my options for an SSD, I decided that the endurance
figures on the ones available at that time were such that I was not
likely to wear one out in its useful lifetime, and that I should be
able to use it freely for anything that needed the speed of an SSD,
rather than worrying about wear. Having done that, after 1.44 years I
am happy with going that way.
_______________________________________________
mythtv-users mailing list
mythtv-users at mythtv.org
http://lists.mythtv.org/mailman/listinfo/mythtv-users
http://wiki.mythtv.org/Mailing_List_etiquette
MythTV Forums: https://forum.mythtv.org

It will be interesting to compare your SMART data with that of my much older drive, which I just remembered is a Kingston HyperX (Sandforce??).

I'll provide more info in a couple of hours.

BR.

--Marius--

 

