ColdFusion Muse

It's "Retired" Jim!

In another chapter of "The Cloud Never Crashes", I woke up Sunday to find one of my AWS instances down, with a notice reading "Amazon EC2 Instance scheduled for retirement". Retirement? What does that mean? I went to check my email and realized that the "retired" instance was the email server. Doh! It took me a little while to figure out what they meant. Amazon's definition: "An instance is scheduled to be retired when AWS detects irreparable failure of the underlying hardware hosting the instance." This serves as a good reminder that the cloud is really just someone else's server.

In theory this is an easy fix. The instructions at Amazon claim that stopping and restarting the instance will launch it on new hardware. In practice I could not get the instance to stop. This is where having physical hardware and a power cord to pull would have been nice. Because the instance would not stop, I could not detach the EBS root volume - even force detaching it didn't work. This is where daily snapshots of EBS volumes come in handy. I was able to launch a new EC2 instance, convert the last snapshot to an EBS volume, and attach that volume to the new instance. Then I moved the elastic IP from the "retired" instance to the new instance and hit "start". Full recovery!

Now I was left with a hanging EC2 instance still stuck in "Stopping" and an EBS volume that I could not use, detach, or delete. I tried reissuing stop commands a couple of times. Eventually I noticed a "Force Stop" option. I do not remember seeing this on earlier attempts; I do not know if it shows up after the first failed stop attempt or after several. I'm not sure, but I think it sends a trained monkey into the datacenter to pull the power cord. In any case it worked. This let me detach my EBS volume. From there I was able to stop the new instance, detach its EBS volume, and attach my original EBS root volume. Now I had full recovery and was able to clean up the loose ends.

Amazon Web Services has given us a new euphemism. "Retired" means It's Dead, Jim!
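As a footnote for the scripting-inclined: the same console dance can be sketched with the AWS CLI. These are real commands, but every resource ID, the availability zone, the device name, and the address below are made up for illustration - substitute your own, and note that on a VPC instance associate-address wants --allocation-id rather than --public-ip:

```shell
# Force-stop the wedged instance (the CLI equivalent of the console's "Force Stop")
aws ec2 stop-instances --instance-ids i-0aaa111 --force

# Turn the most recent snapshot into a fresh volume in the new instance's availability zone
aws ec2 create-volume --snapshot-id snap-0bbb222 --availability-zone us-east-1a

# Attach the restored volume to the replacement instance as its root device
aws ec2 attach-volume --volume-id vol-0ccc333 --instance-id i-0ddd444 --device /dev/sda1

# Move the elastic IP over to the replacement instance
aws ec2 associate-address --instance-id i-0ddd444 --public-ip 203.0.113.10
```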

CF Webtools is an Amazon Web Services Partner. Our Operations Group can build, manage, and maintain your AWS services. We also handle migration of physical servers into AWS Cloud services. If you are looking for professional AWS management our operations group is standing by 24/7 - give us a call at 402-408-3733, or send a note to operations at

Authorize.NET Temporarily Ending TLS 1.1 and TLS 1.0 Support

At CF Webtools we have been preparing for this inevitable day for the past few years. We've been upgrading our clients' servers and services to handle TLS 1.2 calls to Authorize.Net and other third party processors for a while now. Recently Authorize.Net announced a "Temporary Disablement of TLS 1.0/1.1" for "a few hours on January 30, 2018 and then again on February 8, 2018." This is in preparation for the final disablement of TLS 1.0/1.1 on February 28, 2018.

As you may be aware, new PCI DSS requirements state that all payment systems must disable earlier versions of TLS protocols. These older protocols, TLS 1.0 and TLS 1.1, are highly vulnerable to security breaches and will be disabled by Authorize.Net on February 28, 2018.

To help you identify if you're using one of the older TLS protocols, Authorize.Net will temporarily disable those connections for a few hours on January 30, 2018 and then again on February 8, 2018.

Based on the API connection you are using, on one of these two days you will not be able to process transactions for a short period of time. If you don't know which API you're using, your solution provider or development partner might be a good resource to help identify it. This disablement will occur on one of the following dates and times:

  • For Akamai-enabled API connections, the disablement will occur on January 30, 2018 between 9:00 AM and 1:00 PM Pacific time.
  • For all other API connections, it will occur on February 8, 2018 between 11:00 AM and 1:00 PM Pacific time.

Merchants using TLS 1.2 by these dates will not be affected by the temporary disablement. We strongly recommend that connections still using TLS 1.0 or TLS 1.1 be updated as soon as possible to the stronger TLS 1.2 protocol.

This means that if you are using older methods to make calls to Authorize.Net that are not capable of making TLS 1.2 connections then you will NOT be able to process credit card transactions.

This affects ALL ColdFusion versions 9.0.2 and older! This also affects ColdFusion 10 Update 17 and older. If your server is running any of these older versions of ColdFusion and your server is processing credit cards with Authorize.Net then this advisory applies to your server.

CF Webtools has been successfully mitigating this issue for clients' servers for the past couple of years and we are very experienced in resolving these security related issues. In a previous blog post I tested which SSL/TLS levels were supported by various ColdFusion versions on various Java versions and produced an easy to read chart.
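If you just want a quick, platform-neutral check of what an endpoint will negotiate, here's a short Python sketch. Nothing ColdFusion-specific here, and it only tells you about the server side of the handshake - your ColdFusion/Java stack still has to be able to speak TLS 1.2 itself:

```python
import socket
import ssl

def min_tls12_context():
    # A client context that refuses TLS 1.0/1.1 - roughly what the
    # payment processors now require of callers.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def probe(host, port=443):
    # Handshake with the server and report the negotiated protocol,
    # e.g. "TLSv1.2". Raises ssl.SSLError if the server can only
    # manage something older.
    with socket.create_connection((host, port), timeout=10) as raw:
        with min_tls12_context().wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()
```

Point probe() at your processor's endpoint; if it raises instead of returning "TLSv1.2" (or newer), the far end is the problem rather than your server.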

If your ColdFusion server is affected by this or if you do not know if your ColdFusion server is affected by this then please contact us (much) sooner than later. Our operations group is standing by 24/7 - give us a call at 402-408-3733, or send a note to operations at

Can't Save applicationHost.config?

In an email group I am in there was a problem brought to the floor regarding removing the wildcard mapping from the applicationHost.config file in IIS 7.5. This file resides in the system32/inetsrv/config directory of your Windows server and it contains the defaults for all sites. The defaults can be overridden using a web.config file in the root directory of an individual site. The user in question was trying to manually uninstall the connectors for ColdFusion 9. After going through all the steps there was one thing he could not seem to get rid of - a wildcard mapping for the jrun_iis6.dll in the handler mappings of IIS. He went down the list of things he had done and we all agreed that each file and step was complete - including removing the "global" mapping for this handler from applicationHost.config. While we were puzzling, super guru and CFCelebrity Charlie Arehart (he of CF 411 fame) reminded us of a "gotcha" that occurs with files in the system32 directory and its subdirectories.

The scoop is, if you open the file using a 32-bit editor (say Notepad++) from system32/inetsrv/Config, Windows does the old switcheroo and opens the identical (but unused) file in the SysWoW64/inetsrv/Config directory. You are editing the wrong file and you don't even know it (wow! ... or maybe I should say WoW64!). As an aside, this poorly named directory, SysWoW64, stands for "Windows on Windows 64 bit" - meaning files in this directory are "old 32-bit" Windows files running on this "64-bit" version of Windows. Not only is that unclear, it makes the continued use of system32 confusing. Why not just use system32 and system64? Ah well, I digress.
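The redirection is easy to reason about as a path rewrite. This little Python sketch is illustration only - the real substitution happens inside Windows, not in your editor - but it shows which file a 32-bit process actually opens:

```python
import re

def wow64_view(path):
    # The file a 32-bit process really gets when it asks for a path
    # under System32: Windows silently swaps in SysWOW64.
    return re.sub(r"(?i)\\System32\\", r"\\SysWOW64\\", path)

print(wow64_view(r"C:\Windows\System32\inetsrv\Config\applicationHost.config"))
# C:\Windows\SysWOW64\inetsrv\Config\applicationHost.config
```

The escape hatch, if you are stuck with a 32-bit editor, is the virtual C:\Windows\Sysnative\ alias, which lets a 32-bit process reach the real System32.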

The Fix

The long and short of it is, to make sure you are editing the correct file, use Notepad (the copy that ships with the OS runs as a 64-bit process). To avoid mistakes, open Notepad first as Administrator, then navigate to and open the file in the /system32 directory. If you don't, you will pull your hair out trying to figure out why your changes won't take. For more information check out this blog post on the issue by Mike Ratcliffe.

As always Muse readers, thanks for your patronage. Especially thanks to those of you who have begun to pass referrals to folks who may need us - we are riding high because of you.

Fun and Games With Googlebot

When planning for scalability, one of the things that is sometimes left out is the impact of indexing bots on your site. If you have a news or ecommerce site that is constantly changing, you definitely want bots to be indexing your site. How else are the latest and greatest products or stories going to show up in organic searches, after all? But you also want bots to be well behaved. It would be great if you could greet the bots at the door and say "Hey... it's 2:00am, not much going on, so index to your heart's content." Or, "Whoa there fella - do you have a reservation? This is Cyber Monday and I'm afraid all our seats are full for paying customers. Can you come back in 12 hours?" But that sort of smart interaction is sadly not in the cards. Some bots have defined rules, some do not. Some honor things you put in the robots.txt file; others do not. So here are some tips that might save you some time.
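One of those tips up front: whatever rules you do publish, sanity-check them the way a well-behaved bot will read them. Python's standard library ships a robots.txt parser; here's a sketch against a hypothetical rule set (the paths and the rules are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: keep every bot out of the cart and
# request a crawl delay (which, as noted, not every bot honors).
rules = """
User-agent: *
Crawl-delay: 10
Disallow: /cart/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "/cart/checkout"))    # False
print(parser.can_fetch("Googlebot", "/products/widget"))  # True
```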


FileZilla Vs. FileZilla - Battle of the Century

Here's a problem that comes under the heading of the right hand being unaware of what the left hand is doing. Seriously, the right hand is just sitting back, relaxing, maybe pushing the remote, while the left hand is playing the drums, waving at passers-by and tap-dancing in a little tuxedo. The right hand says, "Hey... what in the ham sandwich is going on over there?" And the left hand says... Eh... I'd say that is as far as I want to go with the whole left-hand-right-hand thing.

The Problem

From time to time I end up troubleshooting an FTP connection. Like most system admins I hate FTP with a capital ick. Insecure, clunky, fault intolerant... It's like arriving at the Oscars in a VW Bus. It gets you there, but there has to be a better transport than this! Where we need to support FTP (and always through a VPN please - do the rest of us a favor!), we use a product called FileZilla FTP Server. It's a great product and you can be up and running in about 5 minutes on just about any Windows platform. Simply add users, folders, IP restrictions, etc. and you are off and running.

The only problem is that occasionally people will call and simply can't get connected. The server is able to recognize them and verify their credentials, but when the first command after login is issued (LIST, I think) it times out and drops the connection. After a while I realized that the folks who were experiencing this problem were all running the FileZilla client. That's right - the FileZilla client has trouble connecting to the FileZilla server. If I asked them to use a different client, the problem went away.

The Solution

A few weeks ago I finally figured out the solution based on a comment by super-genius-guru Wil Genovese, who (I'm thrilled to say) works for the Muse and does miraculous things almost every day for our company and staff. He mentioned the problem with FileZilla and UTF-8 encoding. Some experimentation helped me determine an actual fix. Apparently the FileZilla client is passing the string "UTF-8" for encoding (which looks OK to me) and the FileZilla server is expecting "UTF8". Fortunately the site manager has a way of specifying a custom encoding string. Click on a site, then click on the "charset" properties tab. One of the choices is "custom character set". Choose it and enter UTF8 (without the dash). You should be able to connect fine after that.

Note: this problem doesn't exist with every client and server version combination.
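As an aside, encoding names like these are normally treated as interchangeable aliases. Python's codec registry, for example, normalizes both spellings to the same codec - which is exactly the courtesy the affected FileZilla server versions don't extend:

```python
import codecs

# "UTF-8" and "UTF8" resolve to the same codec; robust software
# normalizes the name rather than comparing the raw string.
print(codecs.lookup("UTF-8").name)  # utf-8
print(codecs.lookup("UTF8").name)   # utf-8
```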

Fun with SMTP Relay

This is a post about solving a particular problem with SMTP relay that involves mass emails. Whenever I write a post on this topic there are 2 things that my savvy readers always feel compelled to tell me:

  • "Hey Muse, make sure you are not sending SPAM" - Thanks for the advice. The Muse takes a dim view of SPAM. Like everyone else I'm tired of being told I've won the lottery, have friends in Nigeria, and need to be more concerned about my size. The emails in this case are not spam - but if you are tempted to make that comment I appreciate that you have the Muse's back.
  • "Hey Muse, you are crazy to do this yourself" - Thanks for that as well. And please don't hesitate to tell me (again) about the various services that are out there - all of which are better equipped technically, mentally, physically, and ecumenically to handle my email so that I need not be an expert on the topic. I always appreciate that input. The only thing I like better than leaving money on the table is transferring it to another vendor. In fact it's an axiom of business to never do anything for money that you can pay someone else to do for you. Next week I'm writing about server troubleshooting and that will give you an additional opportunity to tell me (again) how no one needs to worry about that anymore either because the cloud fairies do it all magically.
Now that the preliminary caveats are out of the way, let's talk environment, then problem, then solution.


Dev Tip 101 - the HOSTS File

Fair warning - this is a pretty "101" post so it might be a tad ho-hum for some of you. It's surprising to me how many developers I meet who stare at me blankly when I suggest that they use their HOSTS file for one purpose or another. The HOSTS file has been around since the first networks - although it's gone by a few names over the years. In its current form HOSTS has been around largely unchanged since ARPANET and is in fact the predecessor to DNS (which has reached a venerable age in its own right). As a web developer, learning some easy fundamentals about the use of this file is a practical skill - so let's explore it a bit, shall we?


Arcane Networking Tip Number 702 - Non-Static Mapped IPs

This falls along the lines of one of those tips that matters only to sys admins, firewall managers or network engineers. So if you aren't a networking geek (or don't aspire to become one) you can skip this tip. Here's the skinny.

When setting up a Windows server I like to use an "internal only" IP address - one that is not statically NAT'ted to anything - as the source IP address. In most cases this means the IP address presented when making outgoing requests is the external address of the firewall instead of the "real static" IP. If you don't know what I mean by source IP, remote into your server and use a browser to go to a "what is my IP" site. Whatever it gives you back is your "source" IP address - the IP presented by outgoing requests. In fact, if you check your source IP from your desktop in a typical corporate office and then go to a neighbor's computer and check it there, you will likely see the same IP address. This is because most desktops sit behind a firewall and the firewall has an assigned IP address that it presents as the "source" IP for most traffic. And that "external firewall address" is also the one I often choose to use for outgoing traffic from a server.

Ok, so why is that a problem? Well a server is a little different. In most cases it will have one or more IPs that are "statically mapped" to its own live internet ips. For example, let's say the DNS record for is pointed to "Inside" the network the server actually has an IP of When web traffic hits the firewall for the "external" address ( it looks at its translation table and knows that the "inside" address that "equals" is actually - and then it checks to see if the traffic is allowed (that's the "firewall" function of a firewall) and forwards the traffic to port 80 on That's "network address translation" to a "statically mapped ip address" (whew!!). Ok, take a drink of water - the dizziness will pass momentarily.

Now for a variety of reasons I often don't want the IP address that the server presents when making outgoing connections to be "statically mapped". Instead I often prefer it to present the outside IP address of the firewall (as a sort of generic proxy for my whole network). In Windows Server 2000 and 2003 that was easy. I would just make sure that the first IP address I added (the one that you actually "see" in the little network IP properties window before you click on "advanced") was a non-statically mapped IP. All outgoing traffic would "choose" this first IP by default and voila! I had the results I was looking for. Then I could just add my other "statically mapped" IPs in the advanced tab and move on.

With Windows 2008 R2, however, this source IP address can switch to one of the other IPs in the pool. So even though I added my non-static IP first, eventually my server might switch to using a statically mapped IP. This is probably only an annoyance for me. But if you are one of the tiny minority of people who geek out over such things, here is the solution.

The Fix

To get the behavior you want, start out the same way. Add your non-static IP as the first IP address per usual. Then, instead of adding additional IPs using the "advanced" tab, open a command line and add them with the netsh command and the "skipassource=true" flag. It's that "skipassource" flag that does the magic. Here's the syntax for you.

netsh int ipv4 add address network_1 <ip-address> skipassource=true

One note - the label network_1 in the syntax above is the "name" of the adapter or "network" you are adding to. You can find this in network properties. By default it is "Local Area Connection" but I always rename it to something without spaces so I don't have to do too much head scratching (with quotes? without quotes? single quotes?). If you add your subsequent IPs like this from the command line using the skipassource flag, then your "non-static" IP will always be the default preferred IP for outgoing traffic. Hope this is of use to someone. Happy coding.
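If you ever want to double-check which addresses carry the flag after the fact, the verbose address listing includes a "skip as source" field for each address:

```shell
netsh interface ipv4 show ipaddresses level=verbose
```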

Host Files and the Resolver Cache

This is something I sometimes assume everyone knows - but it's actually a networking thing, and quite a few very bright developers get that deer-in-the-headlights look when dealing with networking issues. The question is, "How do I test a domain without making it live?" This is an important question. For example, you might have code that is domain specific. That happens on sites with a "shared codebase", or with Ajax, or whatever. In addition, you might be developing on your local machine (many - perhaps most - folks do) and want to set it up so that the domain points to your local machine. You might have a site about to "go live" that you want to thoroughly test prior to changing DNS. So there are many reasons you may need to do this. Here's a quick tutorial on how to use the hosts file to make this happen. Note - the examples are for Windows, but Mac and Linux also have hosts files so the principle still applies.
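The file format itself could hardly be simpler: an IP, some whitespace, then the name(s) that should resolve to it. A sketch - the domain names and the staging IP here are placeholders:

```text
# Windows: C:\Windows\System32\drivers\etc\hosts    Mac/Linux: /etc/hosts
# Point the live domain at the local dev machine:
127.0.0.1     www.example.com
# Point a not-yet-launched site at a staging box:
10.1.2.3      staging.example.com
```

After saving, flush the OS resolver cache (ipconfig /flushdns on Windows) so the new mapping takes effect without waiting for cached lookups to expire.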


The Muse Goes "Deeper" on the Mac

I will probably get hammered for this, but it really annoys me that the "finder" does not expose everything on the machine to my prying eyes. I don't need an operating system that obscures anything from me. I want to poke into every single little nook and cranny of the file system, device drivers, logs, config files, run levels, permissions.... I'm not interested in a "friendly" interface that pats me on my hand and shows me just what I need to know. I want to dig into the whole banana and find out everything I can.

So I spent an hour looking for something like Windows' "Folder Options" or 08r2's "god mode" console - where I could enable file extensions and hidden and system folders. Finally my good friend and colleague Wil Genovese told me to download "Deeper" - a Mac utility whose sole purpose seems to be to enable various things that are "off" by default on a Mac. I can now see the whole file system (thanks Wil!). On to the next challenge!

The Muse and the Mac

I bought a new Mac yesterday and I'm diving in trying to figure things out. The Muse knows his way around every flavor of Windows going all the way back to Windows 95 and up to every version of the server product. I cut my teeth in IT as an MS network engineer. But I've seen and fiddled with Macs before. I have added hardware (SCSI drives, controllers, RAM etc.), configured print drivers, and connected to shares on the network - all in the way of support for some of my design buddies like Erin Osterberg (a beautiful and wonderful video editor working for my good friend Rob Helling at Sonburst Communications).

But I've never actually been a Mac user. Mostly I try to stick to what works, and having a modest aptitude for PCs and servers I found my niche there. It's also hard to stomach the price. It seems with the Mac you are paying twice as much for the same hardware that is in a PC... except you get that thar fancy brushed aluminum casing and a shiny mouse and a brushed aluminum keyboard that looks to be made for a child.

Still, I have a need to work with some iPhone apps, so I need a Mac to run Xcode. I bought an iMac with a giant screen, set that bad boy up, rubbed my hands together and started in.

From this point forward I suspect that some of my readers will likely treat me with a rueful chuckle and some ribbing. It may be painfully obvious in the next few paragraphs how clueless I am. Anyway - here goes. The Mac's reputation for being easy to use is well earned. I did not have any trouble getting my network configured and figuring out all of the personal preferences. I managed to install Firefox (and Firebug), Chrome and Eclipse. I found the "software updates" and ran them, nicely updating a good many things on my machine. I managed to register the machine in Active Directory and add my domain permissions to my keychain. I even figured out how to remove all the foofy stuff I'll never use from the dock (iPhoto, iTunes, iMovie, iChat, iCal, iStartEverythingWithI). So far so good.

Install How Exactly?

So my first "issue" is with something weird happening with installs. I installed Firefox, then dragged it to the applications folder and (I think) to the dock. But on my desktop there is an item that says Firefox with an icon like a drive. There's another one that says "Chrome" after that install. As far as I can see I have Chrome and Firefox in the dock. When I try to delete them the Mac asks me if I wish to "eject" them. Guru Toby Tremane tells me that Mac installers are downloaded as ".dmg" files - disk image files. How they got on the desktop I'll never know.

Network Follies

The Muse is all about work at the office, so my next task was network resources. I was feeling good about it too. I managed to get connected to my network printer OK and I've mapped server shares before on a Mac. I have about 10 or 12 shares to mount representing various projects, shared doc storage and servers that I keep track of and visit from time to time. This turned out to be an exercise in frustration. As I see it at this point (and this may change when I find out the myriad of things I don't yet know) Macs don't really like to play nice as Windows network clients. For one thing there's no drive letter.

I knew this and expected it of course, but what I did NOT expect was the complete inability to go to the file explorer, enter a UNC path and see the share content. Surely there is something on a Mac that allows me to simply browse UNC paths ad hoc without the necessity of going through the whole "connect to a server" dance. And please, if you are going to clue me in, don't forget to tell me the shortcut keys. I go hours without touching the mouse on a PC, but the Mac seems to want me to "drag" things around to make use of them.

Once I did get a drive "mapped" (sort of) using the "connect to a network server" widget, I could only see it in the "finder" under "Devices" (not "drives" or "shares" or "network resources" but "devices"?). Furthermore I could not seem to rename the "device" once I had established the linkage. This was a problem to say the least because I had shares of the same name. For example, I mapped to ServerA with a UNC of "\\serverA\webs" and ServerB with a UNC of "\\serverb\webs". In my "devices" I now saw 2 devices both of which were named "webs". There was no way to simply rename them so I could tell them apart either. I did find I could make a sym link (an alias) to these drive mappings and rename that link. I did that on the desktop and that got me a little further.

Now, I went to open some projects (in Eclipse) at these locations and I had a terrible time finding them from the browse application. Some command line searching (thank god a Unix command line is still operable) and it turns out these mapped drives are actually linked to the "volumes" folder. Well of course they are - Unix under the hood, remember Mark!!

Remember my "webs" example? Closer examination showed that I had a /volumes/webs folder (mapped to serverA) and a /volumes/webs-1 folder (mapped to serverB). But in the chooser I had 2 "devices" both of which simply said "webs". And here's the kicker - clicking on either one of the devices opened only /volumes/webs-1. In other words, the short cuts in chooser were crossed up and pointed to the same share. If I navigated to the volumes folder on my own and clicked on one or the other I could get the content I was looking for, but neither the chooser nor the aliases on the desktop seemed capable of getting me to both locations.

First Take

I'm impressed with the speed and the aesthetics of the system. I suspect it will take me a week or two to really feel comfortable. Next I have to install the iOS SDK for iPhone development, Skype, Photoshop, Parallels and a few other widgets to make it more usable. The screen is also impressive. Chrome and FF both work quite well. Eclipse seems to load and run adequately. I am also seeing why some people (like super genius and senior CF Webtools ColdFusion and Java developer Guy Rish) prefer one really giant screen to several smaller ones. I have three 21-inch monitors and I thought that was pretty grand - but that 27-inch monitor really makes a difference with Eclipse. I can get code, debugging, log tailing and file explorer on the screen without sacrificing a clear view of any of them.

Final Caution

As you can tell from this post, I'm not afraid to put myself out there. I'm trying something new and I want to share, both personally and professionally, my take from the experience. I welcome comments to my blog as all my readers know. Indeed, some of the best content is often found in the comments. So with that caveat, I want to say that this is not the time for the old Mac vs. Windows debate. If you wish to flame and draw out that argument I can assure you that your comments won't last here. Please keep the discourse civil. If you have tips about how to help an old windows hand get the hang of a Mac, that would be splendid. If you want to address any of my specific points in this post - have at it. But if you only wish to jump in and start a holy war, please refrain. I'm sure there is plenty for us to learn without resorting to useless and trivial arguments. Ok... now that wasn't so bad was it Muse readers? Don't worry - the Novocain wears off in about 90 minutes :)

IIS 7 and the Web.config File

If you are new to IIS 7 you may not know about the Web.config file. This file acquires its initial properties from the global settings that you set at the server level (as opposed to the site level). If you make certain changes to the global settings (like adding a default doc for example) then a new web.config file is automatically created and put in the root of each site you add. Or possibly it's created when you fiddle with the site specific settings and deviate from the global settings. I'm not clear on when it is and is not created. But you can of course create one for yourself. The format looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <defaultDocument>
            <files>
                <clear />
                <add value="index.cfm" />
                <add value="index.html" />
                <add value="Default.htm" />
                <add value="Default.asp" />
                <add value="index.htm" />
                <add value="iisstart.htm" />
                <add value="default.aspx" />
            </files>
        </defaultDocument>
        <directoryBrowse enabled="false" />
    </system.webServer>
</configuration>

This one simply specifies a few things for this site - the list of potential default docs and whether directory browsing is enabled or not.

An important note (and something I just ran into today) is that this file is pretty specific to the server you are on. It's not a great idea to put this file in your source code repository and have it deployed for example. In our case we deployed the production file to staging. The production file had a specific line in it from the production web server that implemented DotNetDefender (a nice URL scoping filter that helps weed out DOS, buffer overruns and other pernicious attacks). Our dev site (which is all internal to our network) doesn't have this filter installed. When the web.config file was deployed it resulted in our requests all returning 404 errors. It took about 20 minutes of head scratching before I figured out what was going on. Imagine how panicked we would be if we had deployed a dev web.config file to production with the same result (yikes!).

Anyway, like many site specific files (ini files, sometimes Application.cfm or .cfc files etc) you should carefully consider whether you want this particular file to become a part of the "official" code base.

Finally, there are many things you can do with the web.config file - much like the venerable httpd.conf file. You can add rules for mod_rewrite, add specific redirects, control permissions for folders etc. It's a very versatile new tool in the IIS arsenal. And yes Wil, you can manipulate this and all other IIS properties from the command line. Indeed, with Win08r2 Core you don't even need a desktop to be running at all. How does that grab you?
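To illustrate that command line point: appcmd can change a site's settings without opening web.config at all. A sketch - "Default Web Site" stands in for whatever your site is actually named:

```shell
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:defaultDocument /+"files.[value='index.cfm']"
```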

Dynamic Compression on ColdFusion 9 and IIS7

Maybe you already know that web servers can compress outgoing content. Compressed content arrives at the browser, which decompresses and renders it. This is all generally seamless to the user and results in more effective use of bandwidth. Now, compressing static files (like .html files) is a no-brainer for web servers. They simply pre-compress the files and store them in a file cache somewhere. When the original file is called for, the web server serves up the compressed file instead.

Dynamic files are more problematic. There's no correlation between the file name and the buffered output of a ColdFusion page, for example. Consider search results. One user might receive 10 results and another user might receive 10 completely different results. Still another user might receive 100 results. How is the web server supposed to compress that data? Like your app server, it does it "on the fly". It waits for ColdFusion to return the response buffer, compresses it in memory (as I understand it) and then outputs the compressed buffer to the browser. At least that's the way it works in theory. In practice you might find that ColdFusion 9 and IIS 7 don't quite have this figured out yet.

Before I give you the blow-by-blow (and thankfully a solution) I want to make it clear that this problem and solution come to me by way of my good friend and colleague Vlad Friedman of Edgeweb Hosting. EdgeWeb consistently receives the highest possible reviews from its customers and Vlad is one of the brightest folks I know in our corner of the IT world. Now let's talk about our little problem shall we?
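For reference, the IIS-side switch for compression lives in the urlCompression element. A minimal, illustrative site-level web.config fragment looks like this (the static and dynamic compression modules must also be installed and enabled at the server level):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    </system.webServer>
</configuration>
```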


The Muse Visits EdgeWeb Hosting

On Monday and Tuesday of this week I was privileged to spend some time in Baltimore, Maryland at the downtown location of EdgeWeb Hosting (EWH) - a hosting and data center services company owned and managed by Vlad Friedman. EWH specializes in ColdFusion hosting (although they have many other services by now). I've known Vlad for years through some mutual customers and through an email list on which we are both active participants, but I had never met him in person. Since I was doing some "emergency consulting" for a mutual customer I needed to spend a day or two on site at EWH. Vlad was kind enough to show me around his data center and give me the "inside scoop" on the data center business.

The EdgeWeb data center is in a massive facility in the heart of Baltimore. EWH has redundant everything - including redundant power from separate substations, 4 way redundant UPS, and impressive cooling. The entire infrastructure has been recently designed from the ground up with the care and planning of a master craftsman. Vlad is understandably proud of the center and the staff he has assembled. I don't remember all the things he showed me but his networking topology uses the latest and greatest adaptive routing and his security setup (intrusion detection, audit control and the like) is state-of-the-art. I have visited a fair number of data centers but I was really impressed.

I was able to meet some of the EWH staff as well. His DBA and I spent some time gabbing about the differences between MSSQL 05 and MSSQL 08. His operations director is one of those IT pros who know exactly the questions that need asking. But I already knew that EWH has good staff. We have worked with his hosting support staff for years. We have a number of high profile customers hosted at EWH and we have always given the support staff high marks for their knowledge, practical know-how and alacrity. There is a reason they are often voted best in class for hosting and data center services.

On Tuesday evening Vlad took me to G & M restaurant for the best crab cakes I've ever had (and I am a crab cake lover). We had a great time telling our stories and filling up on crab and shrimp cocktail. As I told Vlad, it was the best meal I've had in many months - and of course a geek like the Muse thrives on conversations about hacking, retro computers, security vulnerabilities, and the business of IT. As is often the case when I meet someone who has built a thriving business I was able to glean many pearls of practical wisdom and advice that I hope will serve me well.

So here's a big thanks to Vlad and to EWH as well as a hearty recommendation. I hope they have a long run at the top of the hosting food chain.

Migrating XP Pro 32bit to Windows 7 Pro 64 Bit

This post is about the ins and outs of moving from XP Pro 32 bit to Windows 7 64 Bit. I just completed such a move and I have some tips for you that might save you hours of frustration. But before we begin let's get a couple things straight. First, this is not a post about the merits or shortcomings of Microsoft or its products. Nor is this a forum for you Apple users to tell us all how superior you are because your box is shinier than ours. I actually love Apple products, but Apple users have been known to turn red and swell up like giant angry strawberries if you say anything positive about Microsoft. So if you are one of those folks who is going to have a stroke reading about someone actually choosing a Microsoft product, please stop reading now - or at least have emergency personnel standing by. On a side note, my next hardware project is building an Apple from an Intel box and off-the-shelf parts - same OS, less than half the cost. I'll write an article on that and hopefully soothe my Apple readers' ruffled feathers (it probably won't be shiny though).

Meanwhile, let me first say that I was sad to see my XP Pro box go. A computer is more than an OS to those of us in IT. We spend a lot of time and effort making it do things that "regular users" don't have to think about. My desktop XP Pro PC had more than 100 programs installed on it. Many of them I used regularly. I fully expected to have to reinstall numerous programs to ensure full functionality. I also expected to have to abandon some items that would no longer work in my new environment. A year and a half ago I moved from one XP box to another using LapLink's PC Mover and it worked splendidly. This time, however, I was nervous about using PC Mover for 3 reasons:

  • I was moving from XP Pro 32 bit up 2 versions to Windows 7 64 bit (skipping Vista altogether).
  • My XP box had Office 2003 on it and I was putting Office 2007 in the new OS without an upgrade, yet I still wanted my Outlook settings and email to migrate properly.
  • I was moving my login profile from a local account to a domain account.
I naturally assumed that I would have a great deal of work to do just to get the machine back to the functional state from which it started. Even with my reservations, the LapLink docs seemed to indicate the migration was possible, so I decided to use the product anyway. Here is my story.


More Entries

Blog provided and hosted by CF Webtools. Blog Software by Ray Camden.