 

b.l.o.g.

(blogs let others gawk)

May 5, 2014

“It’s just a phone”

Filed under: General, Perspective, Technology Rant — Bryan @ 6:00 am

Today’s news: Police are upset that people are tracking their stolen phones down and confronting thieves.

No. No it’s not just a phone. Ten years ago, for most people, it might have still been just a phone. The reality today is that the modern smart phone is closer to what the medical community might call a Mnemonic Aid.

Let's step back for a moment for those of you who just don't get it, but may have had the chance to play video games anytime since the early '90s. Think of it this way: you're 40 hours into your favorite video game and your save file is lost for some reason (corrupted file, lost memory card, etc…). If you were like any kid in this situation, you just about lost your mind. Some of you said "forget this" and simply walked away from the game. Some of you burned another month getting back to where you last were. Either way, it sucked.

Is that too recent an analogy? How about this one, at least for you pre-'90s guys out there. Remember the "little black book" or tiny sheet of paper you kept folded up in your wallet with the scribbled-on phone numbers and addresses of every person you ever met? What happened when that piece of paper went through the washing machine? You probably about cried and desperately tried to recover what data you could from those scraps of wet paper in every way possible. You usually failed, but either way, it sucked.

If neither of those two analogies works for you, then I don't know what to tell you. Maybe talk to your neighbor, friend, or sibling for some perspective.

But if those analogies touched a nerve… magnify those feelings times a thousand? Times a million?

The modern smart phone is your save file/little black book on steroids. Consider that, for many people, the modern smart phone represents the personal narrative of someone's life for the last year or two, or more if they have simply been transferring things forward like photos, contacts and who-knows-what else. It's the last photos of their father in the hospital before he passed away. It's their daughter's prom photo. It's the heartfelt text from their now-ex telling them how much they loved them before everything went to hell. It's the video they made two years ago with their friends in some city, celebrating some special event which was too crazy to post to Facebook, but it's their favorite memory because they will never have all those wonderful friends together in one place again. For many it's a device that keeps the proof of better times. For others it's a quick reference to everything that is their life today.

It’s the bookmarks and sometimes phone numbers of every restaurant, club or store that someone keeps track of for quick reference. It’s personal notes, diaries, health trackers, fitness trackers, and in some cases all of a person’s favorite music.

It's also the one device, more so than even your home computer, that has instant login access to every important website in your life. It might even store all your passwords as well.

If it were "just a phone", people would be upset, but they'd get over it. You can replace a thing. But it's not "just a phone". It's not just a thing. It's a personal narrative. It is a slice of life. It is the sum total of the most important save file data anyone can have in their life, which they happen to carry with them everywhere they go.

Note to the police: people are not hunting down stolen phones. They are hunting down their save files, and for some of them, that data is worth dying for.

February 9, 2014

Single point of failure (or how important is your data?)

So, this is a story I don’t tell too often but in light of some recent conversations about performing backups following the news about the Iron Mountain fire, I felt it would be insightful to share.

Back in 1997/1998 I learned a very hard lesson about data loss with the publication I co-edited, Game Zero magazine.

First, the back story to explain how this situation ended up the way it did.

We started our web presence near the end of 1994 with a user account at a local company named Primenet, which offered users the traditional array of features (WWW, POP mail, etc…). This worked out great except for a couple of problems. The first was that even though we had registered the domain gamezero.com for our site, Primenet's server name resolution would sometimes flip a visitor's browser to the primenet.com/team-0 URL while the person was traversing the site. This caused lots of people to create bookmarks and links to the site under the wrong URL (this comes into play later).

The second and later problem, although not a technical issue, was the cost associated with bandwidth for WWW visitors to the site. Towards the end of our time with Primenet we were hitting fees of a few hundred dollars a month for bandwidth from our 700,000+ page views a month. Fortunately we had designed our site to be incredibly light, which helped keep costs down, but traffic and fees were climbing. Ultimately I set my sights on moving us to the new "discount" hosting services which were becoming a thing in 1997. It was obvious we could save a significant amount of money by moving the site.

Also, for backups, we had our production computer, which housed all the original and in-development web content, including the active mirror of the website and remote publishing tools, as well as our POP e-mail client for all business e-mail. Additionally, we kept backups of web content and e-mails on a collection of Zipdisks, along with some limited content on a random assortment of floppies.

Remember, in 1997 hard drives were expensive! We're talking a few hundred dollars for a 1GB drive. Our production PC had something like a 120MB drive, as I recall, so we had lots of data offloaded onto the Zipdisks.

About this time we also got word that the provider that had been handling our FTP-based video repository was getting out of the hosting business. I decided it best to roll the video content into the new web hosting arrangement, as the price would still be reasonable. We quickly migrated everything over, changed DNS entries, started sending out e-mails asking people who had the old primenet.com addresses to please update their links, etc… Following the migration we published only a few major updates on the new server, consisting of a couple of new videos and some articles which existed only on the website, our production system and our Zipdrive backups.

Then problems started…

  1. Traffic tanked on the new server.
  2. My crawling the web looking for bad links suddenly made me aware of just how bad the linking issue was, and that a significant amount of traffic was still going to the old Primenet URL. Fortunately, right before we closed our Primenet account we set up a root page that linked to the proper URL along with a notice about the move, which Primenet was kind enough to leave up at no cost. But it wasn't a full site-wide redirect; just the root pages.
  3. A few months into running on the new provider, their servers went dark. When I contacted them to find out what happened, I reached a voicemail informing me that they had filed for bankruptcy and closed the business. Done, gone… No contact and no way to recover any of the data from the web server.
  4. We now had a domain name that didn't respond, our old provider's server was pointing traffic to that very same dead URL, and since we had long since closed the Primenet account we had no ability to log in and change the redirect notice or otherwise redirect traffic someplace else.
  5. While scrambling to find new hosting, the hard drive on our production computer completely and utterly failed. 100% data loss.
  6. After getting a new hard drive I went to start rebuilding from our Zipdisks, and to my horror none of them would read. We had become a victim of what came to be known as the "click of death". We lost some 20-30 Zipdisks in total. Almost everything was gone except for a mirror of the website from before the migration to the new hosting and other random items scattered around. We also had a limited number of hard copies of e-mails and other documents.
  7. Lastly, while the Internet Archive is now a great way to recover website content, at that point in time it was still just getting started, and their "Wayback Machine" had only taken a partial snapshot of our sites (in both the US and Italy). Par for this story, the lost content was pages that had not been crawled yet, except for the index pages for the missing videos. I could view the archive of the video pages… but the linked videos were too large at the time and were not mirrored.

Coming into this, I felt we had a pretty good data backup arrangement. But I learned the hard way that it wasn’t good enough. We lost all of the magazine’s e-mail archives including thousands of XBand correspondences as well as innumerable e-mails with publishers and developers. We lost two videos that had been produced and published. We lost a few articles and reviews. We also lost nearly all of the “in progress” content as well as a number of interviews.

At this point the staff agreed to stop spending money on the publication and formally end the magazine, especially since some of them were already making natural transitions into their careers and school. While we had stopped actively publishing at the end of 1996/start of 1997, if you were to ask me if there was a hard line for the true end of the magazine, this was it.

Ultimately I did get the site back up as an archive which you can still read today. But, that’s another story.

The lesson of this story is to remember that there is no fool-proof backup arrangement. Only you can be responsible for your (or your company's) data, and you must always be aware that no matter what your best efforts are, data loss is always a possibility.

99.9% guarantees are great, except for that 0.1% chance, which is still a chance! And if someone is selling you a 100% guarantee, let me know, because I've got the title to a bridge in Brooklyn I might consider selling you for a deal.

What could I have done differently?

  1. Spread out our backups across more than one media type and more than one location. Simply having a duplicate set of Zipdisks and a second drive off site, with no cross-mixing between the sets, would have made a huge difference here.
  2. More frequent backups of critical business data such as e-mail.
  3. Retained the master account with the old service provider until we were sure traffic migration had been completed.
  4. Isolated both the problematic media and the drive from use at the first sign of the Click of Death, and found a second drive, since the damage propagated once it manifested.
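To make the first point concrete, here's a minimal sketch of keeping an independent, verified duplicate. The temporary directories are stand-ins of my own for a second drive or off-site location, and in practice a dedicated tool like rsync would do the copying:

```shell
#!/bin/bash
# Temp directories stand in for separate media; real backups would live
# on a second physical drive plus an off-site copy.
src="$(mktemp -d)"; mirror="$(mktemp -d)"
echo "draft review, issue 12" > "$src/review.txt"

# Refresh the duplicate set, then verify it actually matches the source.
cp -R "$src/." "$mirror/"
diff -r "$src" "$mirror" && echo "mirror verified"
```

The verify step matters as much as the copy; a backup you never read back is exactly the trap the Zipdisks turned out to be.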

Granted, some of these would have likely added overhead cost, but the question is: would that cost balance against the value of the data lost? I don't know. But since this happened I have been far more diligent in my data storage strategies; I now weigh the value and importance of the data against the breadth and depth of the backup plan and go with the best solution I can devise.

I have had only one significant data loss in the years since this happened. It was just last fall, while I was doing some data re-organization as part of a desktop upgrade. A USB drive I was using for temporary storage fell over and became damaged in such a way that it would no longer read the disk. I then discovered that the data on the drive hadn't been synchronized with the backup repository for a couple of months, for some reason. Fortunately it was non-critical, personal data (downloaded drivers and install packages that I was able to re-download from the Internet), so all in all the only loss was my time. But it was a reminder that even though I am way more careful than before, accidents can still happen.

February 6, 2014

Handheld gaming, mini reviews.

For the last few years, the majority of my gaming has been happening on handhelds. Between the Nintendo 3DS and Sony's PSP and Vita systems, this really has been a phenomenal era for handheld gaming. Due to the length of this list, though, I'm going to break it into more than one post.

Highlights in no particular order:

Patapon 1 and 2 for the PSP/Vita (Patapon 3 was a real disappointment and the first of the series I didn’t bother solving)
I played this series so much that I regularly joked that PSP actually stood for "Play Some Patapon". It's a brilliantly stylized game that combines rhythm and action-platforming with RPG-style elements. The music is catchy and the gameplay is intense to say the least.
Uncharted: Golden Abyss for Vita
A full-blown, console-quality Uncharted game for a handheld. The story is fun and the graphics are top notch. Some of the mini-game elements were annoying, but fortunately not required to progress the game. Play control was solid. The story stands alone and doesn't require you to have played the other games to follow, but having done so adds some extra dynamic between Drake and Sully. The game really sets the bar for handhelds, and I'm disappointed there have been so few games of this caliber for the platform.
Gravity Rush for Vita
A visually stunning game to say the least. Even after being free via the PS+ program for a good year, it's still worth buying just to show some love to the developers. This is the first game, in my opinion, that truly pulls off frenetic 3-D air-based hand-to-hand combat (don't get me wrong, Zone of the Enders (PS2/PS3) and Omega Boost (PS1) still hold a fond place in my heart and certainly set the bar for their platforms… hmm… all Sony platform games… trend?). Play control took a little while to wrap my head around, but once I was into the game it became second nature. The manga/comic-style approach to the cinema/story segments was phenomenally executed. Seriously, it's just an all-around great game and a must-have for any Vita owner.

(cont…)

January 23, 2014

An example of random not so random

Filed under: Grody Hacks — Bryan @ 6:00 pm

So… this is a carry-over from my last post. Now that I had a proof of concept confirming the viability of the data collection I'm doing, it was time to make something a little more automated on my Linux servers.

So, I've got about 20-30 servers that all ping a single point. The ping takes four samples and then gives me the average of the results. I then take those results and do a wget post to another central server, where a Perl script accepts the post and drops it to a .log file (I don't have a database option on this server, so a quick disk write to a unique file is the brain-dead option with the least risk of concurrency issues and open-file handling).

Now, I know I'm dealing with a small number of servers, but to really ensure that I'm not polluting the results with the test itself, I decided to add a couple of randomized sleep steps to the Bash script that does the ping/wget, as seen below:

# regex to pull the average out of ping's "min/avg/max" summary line
regex='= [^/]*/([0-9]+\.[0-9]+).*ms'

# reseed $RANDOM with this process's PID so each server diverges
RANDOM=$$
interval1=$(( RANDOM % 50 + 10 ))   # 10-59 seconds
sleep $interval1

# four pings, then capture the average via the regex
[[ $(/bin/ping -q -c 4 127.0.0.1) =~ $regex ]]

(note, I’m using localhost here, but in the live script it points to the actual server)

So, when this runs, it sleeps 10-59 seconds, runs a ping 4 times and calculates the average.

Then I do another sleep like the one above and send the results via wget to the remote collector.
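That extract-and-post step might look roughly like this. The summary line is canned ping output, and the collector URL and field names are placeholders of my own, not the real endpoint:

```shell
#!/bin/bash
# Canned ping summary line; a live run would capture this from ping -q -c 4.
summary='rtt min/avg/max/mdev = 0.031/0.042/0.058/0.010 ms'

# Same regex as the script above: capture the second field (the average).
regex='= [^/]*/([0-9]+\.[0-9]+).*ms'
if [[ $summary =~ $regex ]]; then
    avg="${BASH_REMATCH[1]}"
fi
echo "avg=$avg"

# Ship it to the collector (endpoint is a placeholder for illustration):
# wget -q -O /dev/null --post-data="host=$(hostname)&avg=$avg" \
#     "http://collector.example/cgi-bin/drop.pl"
```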

I'll also note that the Bash script is run by cron on each of the servers, and all of them pull time synchronization from the same source. So, in theory, these scripts launch at the exact same time on every server.

The uninitiated might intuitively think that the chance of two wget posts landing at the exact same time, down to the second, would be rare if not impossible with random numbers. But sure enough, by default, each run consistently creates multiple log drops with identical time stamps.

Why? Well, random isn't really random from one computer to the next unless you reset the seed value before you use it (and a lot of people go to exceptional efforts to come up with highly convoluted methods to do just this). In this case, true randomness is not critical, so I'm reseeding $RANDOM with the PID of the process running the script. This changes with each execution, and the chance that the same PID will be issued at the same time on any two servers is remote, which is good enough for what I need here.
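Both halves of that are easy to demonstrate in bash: a fixed seed replays the exact same $RANDOM sequence, while seeding from the PID varies it per process:

```shell
#!/bin/bash
# Identical seeds produce identical "random" sequences...
RANDOM=42; first=$RANDOM
RANDOM=42; second=$RANDOM
[ "$first" -eq "$second" ] && echo "same seed, same number"

# ...so reseed from the PID, which differs per process, to stagger runs.
RANDOM=$$
interval=$(( RANDOM % 50 + 10 ))   # a sleep of 10-59 seconds
echo "interval=$interval"
```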

Now I’ve got staggered activity.

Cheers!

January 21, 2014

DOS magic (today’s grody hack)

Filed under: Grody Hacks — Bryan @ 6:00 pm

So, I recently had 8000+ text files. Each file contained a single line of CSV text without a trailing new-line.

These 8000+ files also fell into subsets based on a unique word in the file name (e.g., server1, serverdb, etc…).

My challenge was to concatenate all of the files from each group into their own collected file, adding line-feeds between each file's contents, and to prepend today's date to the file name. That would give me one proper CSV file per group that would be easier to share, store, and process by others.

First, I whip open my friendly Notepad application and create a header.csv text file that contains the column headers for the CSV.

Then, using Notepad again after some unnecessarily painful searching, I finally cobbled together this, which I Select All > Copy, then Paste into my DOS prompt (after CD'ing to the folder with the files):

type header.csv > %date:~10,4%%date:~4,2%%date:~7,2%-SERVER1.csv && for %f in (*SERVER1*.log) do @echo. >> %date:~10,4%%date:~4,2%%date:~7,2%-SERVER1.csv && type "%f" >> %date:~10,4%%date:~4,2%%date:~7,2%-SERVER1.csv
type header.csv > %date:~10,4%%date:~4,2%%date:~7,2%-SERVERDB.csv && for %f in (*SERVERDB*.log) do @echo. >> %date:~10,4%%date:~4,2%%date:~7,2%-SERVERDB.csv && type "%f" >> %date:~10,4%%date:~4,2%%date:~7,2%-SERVERDB.csv

(….repeat as needed… etc…)

  • for %f in … (loops once for each file matching the wildcard, running the rest of the command line against it)
  • @echo. (with the important trailing ".", outputs the new-line)
  • %date:… (captures just a portion of the "date" value and echoes it)
  • && (appends additional command to the line)
  • >> (appends the output of the line to the file)

Yeah, it’s crude, but I’ll take the win.
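For anyone who'd rather fight this battle in bash, here's a rough sketch of the same concatenation. The sample files it creates are throwaway stand-ins for the real header.csv and *.log fragments:

```shell
#!/bin/bash
# Work in a scratch directory with stand-in files.
cd "$(mktemp -d)" || exit 1
echo "host,avg_ms" > header.csv
printf 'server1a,0.042' > a-SERVER1.log   # single CSV line,
printf 'server1b,0.051' > b-SERVER1.log   # no trailing new-line

# Date-stamped output file: headers first, then each fragment on its own line.
out="$(date +%Y%m%d)-SERVER1.csv"
cat header.csv > "$out"
for f in *SERVER1*.log; do
    printf '\n' >> "$out"   # the new-line that @echo. supplies in DOS
    cat "$f" >> "$out"
done
cat "$out"
```

Same shape as the DOS version: seed the output with the header, then append a new-line and a fragment per file.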

If I really wanted to be fancy, I'd try to capture the date from the source files and use that for the imprint on the destination, but I'm not that inspired. Sorry.

I'm posting this in the hope that if someone spirals into this kind of situation like I did today, they'll find this post and go "Yes! That's exactly what I needed!". Good luck.

I’ll file this under TIL about abusing “date” and “&&” in a new way, or maybe I only learned how much DOS I’ve forgotten. Hmm…
