<div class="gmail_quote">2009/10/16 Gary Dawes <span dir="ltr"><<a href="mailto:gary.dawes@gmail.com">gary.dawes@gmail.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br><br><div class="gmail_quote">2009/9/29 Michael T. Dean <span dir="ltr"><<a href="mailto:mtdean@thirdcontact.com" target="_blank">mtdean@thirdcontact.com</a>></span><div><div></div><div class="h5"><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>On 09/29/2009 10:05 AM, Nicolas Will wrote:<br>
</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div>
On Sun, 2009-09-27 at 19:09 -0400, Michael T. Dean wrote:<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
So, that's because there's a "prefix" index on the name column of the people table: only the first 41 (of 128) characters are checked to determine whether values are unique. I'm about to commit a<br>
fix to the corruption detection, thanks to your noticing this. It will make the check more stringent, so it won't cause your DB to pass; don't worry about updating to get it. :)<br>
</blockquote></div>
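To see concretely what that prefix check means (the names below are invented), two rows whose values agree in their first 41 bytes collide under the index even though the full values differ:

```shell
# Two distinct names that are identical through byte 41: a 41-byte
# prefix index sees them as duplicates.
printf '%s\n%s\n' \
  'A very long performer name that goes on and on, part one' \
  'A very long performer name that goes on and on, part two' \
  | cut -b 1-41 | sort | uniq -d
# prints the shared prefix once: A very long performer name that goes on a
```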
I've been hit by the duplicate issue when moving to .22.<div><br>
<br>
I cleaned my DB, but it was still the same.<br>
<br>
In the clean DB file, I changed the 41 to 51 and it restored properly,<br>
as it was checking more characters for uniqueness.<br>
<br>
Then .22 converted its DB version without any problem.<br>
<br>
Should I expect any ill result from this?<br>
</div></blockquote>
<br>
It may break future upgrades, depending on what happens in the future. Changing the index prefix length is /definitely/ not a fix. For the data to have existed in a properly-encoded UTF-8-in-a-latin1-column MythTV database, the first 41 bytes of the UTF-8 data would have to be unique across all rows; but after converting to actual UTF-8, the first 41 characters are not unique--and the first 41 characters encompass more data than the first 41 bytes, presuming there are /any/ multi-byte characters. Therefore, the table is definitely corrupt--containing some latin1 data (which should not be there) alongside UTF-8 data--such that duplicates exist after conversion. So, even after changing the prefix length, you still have corrupt data.<br>
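A quick sketch of that bytes-versus-characters mismatch (the name is invented; any multi-byte character behaves the same way):

```shell
# "Jörg" is 4 characters but 5 bytes in UTF-8, because "ö" encodes as
# 2 bytes -- so a 41-character prefix covers more data than a 41-byte one
# whenever any multi-byte characters are present.
printf 'Jörg' | wc -c                    # byte count: 5
printf 'Jörg' | LC_ALL=C.UTF-8 wc -m     # character count (4 in a UTF-8 locale)
```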
<br>
A better approach is to get rid of the people. They're only used by the People Search (which, chances are, you've never used before), and will be repopulated quite quickly from new listings.<br>
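For the record, a sketch of the SQL that clearing the People Search data amounts to. The table names are assumed, and since `credits` references `people`, it presumably needs clearing too; review this against your own schema before feeding it to the mysql client:

```shell
# Display the SQL for review; once happy, pipe it into something like:
#   mysql -u mythtv -p mythconverg
cat <<'SQL'
TRUNCATE TABLE people;
TRUNCATE TABLE credits;
SQL
```

Both tables are repopulated by mythfilldatabase from new listings, per the advice above.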
<br>
If the procedure at <a href="http://www.gossamer-threads.com/lists/mythtv/users/399395#399395" target="_blank">http://www.gossamer-threads.com/lists/mythtv/users/399395#399395</a> (after correcting my copy/paste errors, as mentioned at <a href="http://www.gossamer-threads.com/lists/mythtv/users/399413#399413" target="_blank">http://www.gossamer-threads.com/lists/mythtv/users/399413#399413</a>)--where the grep is run against either the "uncorrupt" backup or the original backup--doesn't work, or if the data doesn't look right after the database upgrade is performed, your best bet is to wait a few days. If you can't wait a few days and don't mind corrupting your data, it's your data, so it's your decision. But, it's just a few days...<br>
<br></blockquote></div></div><div>Hi<br><br>I have also been hit by the corruption issue when trying to go to 0.22 RC1; I'm currently running 0.21 with database schema version 1215. I have been through the wiki article on how to repair the database several times this morning, and have followed Mike's various instructions to no avail.<br>
<br>The error I get (with verbose on) is:<br><br>'mysql' --defaults-extra-file='/tmp/wPMWZ7oQgH' --host='localhost' --user='mythtv' 'mythconverg'<br>ERROR 1062 (23000) at line 1915: Duplicate entry 'JÃÂÃâ¬Ã»ÃÂÃâ¬Ã»rÃÂÃâ¬Ã»' for key 2<br>
<br>I have searched for this string in phpMyAdmin, and have also run a grep against the initial backup and the repaired file, but cannot find the string. Advice on how to track this down and fix it would be appreciated.<br><br>My MySQL server config is:<br>
<br>mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (i486) using readline 5.2<br><br>Connection id: 55<br>Current database: mythconverg<br>Current user: mythtv@localhost<br>SSL: Not in use<br>
Current pager: stdout<br>Using outfile: ''<br>Using delimiter: ;<br>Server version: 5.0.32-Debian_7etch1 Debian etch distribution<br>Protocol version: 10<br>Connection: Localhost via UNIX socket<div class="im">
<br>
Server characterset: latin1<br>Db characterset: latin1<br>Client characterset: latin1<br>Conn. characterset: latin1<br></div>UNIX socket: /var/run/mysqld/mysqld.sock<br>Uptime: 48 min 29 sec<br>
<br>Thanks very much.<br><br>Gary<br></div></div>
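Since the string in that error message has most likely been mangled a second time on its way to the terminal, grepping the dump for it verbatim may never match. A sketch of an alternative, assuming the `people` table and the 41-character prefix discussed upthread: ask MySQL directly which names collide within the prefix, against the restored 0.21 database:

```shell
# Display the query; run it in the mysql client against mythconverg to
# list the name prefixes that would violate the unique prefix index.
cat <<'SQL'
SELECT LEFT(name, 41) AS prefix, COUNT(*) AS n
  FROM people
 GROUP BY LEFT(name, 41)
HAVING n > 1;
SQL
```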
</blockquote></div><br>Hmm, I've been spending far too much time on this today.<br><br>I have created a fresh install of KnoppMyth R5.5 with MythTV 0.21 on a VM. Checked the MySQL config, which was OK. Created a clean database, ran mythtv-setup to populate the tables, and restored the existing data OK.<br>
<br>I then backed up the data and, after dropping the existing database, restored it OK onto the box with 0.22 installed. The restore worked, but the schema upgrade via mythtv-setup failed with the "DB charset conversion failed" error.<br>
<br>I then tried all of the above again, running the sed command to change the encoding. Again, the restore worked, but the schema upgrade failed with the same error.<br><br>I then dropped the database again on the 0.21 box and recreated it, ran mythtv-setup to create an empty schema, and then performed a partial restore into it. Checked for errors, then backed that DB up. Once again, a restore onto the 0.22 box with a fresh, empty database was successful, but once again the schema upgrade in mythtv-setup failed with the same error.<br>
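For reference, the dump-level encoding change presumably boils down to a blanket substitution like the sketch below (I'm assuming this is roughly what the sed command in question does). Note that it rewrites every occurrence of the string, including any "latin1" that happens to appear inside the data itself, which is one way such a pass can go wrong:

```shell
# Relabel table definitions in a dump from latin1 to utf8; the heredoc
# stands in for a line from the backup file.
sed -e 's/latin1/utf8/g' <<'EOF'
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
EOF
# prints: ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```

Against a real backup this would be something like `sed -e 's/latin1/utf8/g' backup.sql > fixed.sql`.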
<br>Any ideas? I really, really do not want to start a database from scratch, as there are six years' worth of data in there.<br><br>Thanks<br><br>Gary <br>