Raiser’s Edge, love, what were you thinking

TL;DR: In SQL Server Configuration Manager, set the TCP port to a static one if the instance is using a dynamic port, and clear the dynamic port setting. On the server, go into the firewall’s Advanced settings and set Inbound rules allowing both the TCP and UDP ports. On the client, set Outbound rules in the firewall manager for the same ports. Support articles at bottom of post.

So, I do third party tech support for a couple of independent schools. Several of them use a program called Raiser’s Edge to keep track of charitable donations and solicitations. This is all well and good, and the program certainly does the job, but sometimes it makes you want to down a liter of vodka and go home.

The setup: We had to replace a machine that was hosting a networked install of a Raiser’s Edge database. We didn’t realize it was networked until we got the call that they couldn’t access it from their laptop (crap).

The initial troubleshooting: First, we needed to uninstall RE from the laptop (the client computer). It would not go. We finally decided to reboot the machine to make sure RE was not running anywhere. Suddenly, the uninstall went like a breeze.

Now we needed to install RE from the network share on the server. We can’t connect. Turning off all the firewalls on both machines fixed that, but we still could not get to the “Deploy” folder, which should have been the only available network share on the machine.

Turns out that the installer does not set that folder to be “Available” across the network. There was no documentation for that. Set it to “Available” and boom. I can see the network share.

Run the setup.

Install RE.

Try to run RE.

Start getting database errors. Native Error 17 – Can’t connect to the database. Call Support and they say, “Yeah, it’s probably the firewall.”

At first I was too irritated to tell them there was no firewall turned on, but when I mentioned it, they said it was possible an antivirus had blocked the ports needed. Go to this KB article and open the ports.

Yeah. Fine.

I go through the directions, figure out that SQL Server is using a dynamic port, and follow the directions for that configuration. It doesn’t work. Fantastic.

Finally, Darling Husband o’Mine says, “Why don’t you specify the port it uses?”

So in the end, this is what worked:

  • In SQL Server Configuration Manager, under the TCP/IP protocol’s IP Addresses tab, set the IPAll TCP Port to 1433 and clear the TCP Dynamic Ports value
  • Stop and restart the SQL service/reboot the machine (I wound up rebooting, but YMMV).
  • On the Server, in Firewall Management, under Advanced Settings, set up Inbound Rules for the following (there’s a command-line sketch after this list):
    • TCP port 1433 open (or some open port)
    • UDP port 1434 open
  • On the Client, set up Outbound Rules for the following:
    • TCP port 1433 open (or whatever port you used on the server. THEY HAVE TO MATCH)
    • UDP port 1434 open
  • Install RE on the client machine from the Deploy share on the Server.
  • Test the connections.
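If you’d rather script the firewall rules than click through the Advanced Settings GUI, something along these lines should do the same job on Vista/Server 2008 and later. Treat it as a sketch, not gospel: the rule names are made up, the ports assume you used 1433/1434 as above, and the commands need an elevated prompt.

    rem On the server: allow inbound SQL Server and SQL Browser traffic.
    netsh advfirewall firewall add rule name="SQL Server TCP 1433" dir=in action=allow protocol=TCP localport=1433
    netsh advfirewall firewall add rule name="SQL Browser UDP 1434" dir=in action=allow protocol=UDP localport=1434

    rem On the client: matching outbound rules (note remoteport here, since
    rem what matters going out is the destination port on the server).
    netsh advfirewall firewall add rule name="SQL Server TCP 1433 out" dir=out action=allow protocol=TCP remoteport=1433
    netsh advfirewall firewall add rule name="SQL Browser UDP 1434 out" dir=out action=allow protocol=UDP remoteport=1434

For the “test the connections” step, telnet <servername> 1433 (once the Telnet Client feature is enabled) or PowerShell’s Test-NetConnection, if your Windows version has it, will tell you quickly whether the TCP port is actually reachable from the client.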

If any of this makes no sense, here is the supporting documentation for all of it:

Good luck, and may the force be with you on this one.

Offshoring Gone Wrong

Here’s a tale of offshoring gone wrong.  This doesn’t qualify as horribly wrong, or even a disaster, but only because very little money was on the line.

I used to work for a small software company with a well-known product that has a long pedigree (it shall remain nameless, but our major competitor was WinRar).  I actually miss working there —  well, I miss most of it, but I did leave voluntarily.  That’s a story for another time.

We had started translating our primary product into many languages, and we wanted to provide localized translations of our website as well.  In order to save some cash, management decided that we would outsource and offshore the translation of our company website.  Our new president knew of the perfect company to hire, too.

My boss — the VP — and the rest of the engineering and IT team were all a little nervous about dealing with this new company, not only because we didn’t have a great way to verify the work but also because we didn’t have a good relationship with the new president. (Distrust isn’t strong enough a word, but it describes it well enough for this story.)  The first couple of sub-projects came back and looked ok, though, so we started to think we were over-worrying the problem.

Our process was to scrape our own English site, determine which pages and what snippets we would translate, and send those items as plain text to the translators.  After a couple of days we would start getting the translated documents back and we would build the site.

We had a few bumps along the way, such as getting plain-text documents back with an unspecified code page (we had asked for UTF-8 but didn’t initially get it, and eventually had them send the documents in Word to sidestep the character-encoding problems), but the process seemed to be working overall.  We ran the translated documents through Google Translate to make sure the reverse translation (back to English) looked ok, and it did.  In retrospect, it was a little too perfect.

So, fast forward a couple of weeks, we get the third or fourth package back. My boss noticed something… odd on one of the pages. It was worth calling the rest of the team into the office to check it out, stat!

If you guessed that it was an artifact from Google Translate’s page – just a straight copy and paste from browser to Word document that picked up a little too much – you’d be correct.  Cue immediate back-pedalling from the vendor that “it was just that one document” and “the other translations were done by hand” and by native speakers.  Haha, not so much.


Author’s Note: Though this post may seem, at first glance, to be a warning against offshoring, it’s really a warning about hiring executives with too-cozy relationships with vendors.  I’ve seen offshore projects go well and go sour, but the cronyism I saw with the above-mentioned new company president was almost always followed by a bitter taste in our mouths.

The Troublesome Broadcast Message

Back in the good-ole days of Windows NT (circa 1998) I was a member of IT support at a large multi-national corporation.  The campus I worked at was about five thousand people.

Background: Windows 95/98/NT 4.0 had a neat little utility to send pop-up messages to specific machines.  It was a front end to the net send built-in command, and messages would appear almost instantaneously on the recipient’s machine in a nice little window.  (Similar functionality still exists in more recent versions of Windows, but the Messenger service no longer starts by default.)

So, one slow day a bunch of us were shooting the shit and getting a little rowdy.  I think there were some flying objects and maybe a nerf gun involved.  One of the upper-level techs, who shall remain unnamed, fired off a message to someone else: “John, look out behind you”.

Only, he didn’t get the machine name right.  He broadcast it to the entire campus.  5000+ machines.

A lesser-known feature of the net send command, and therefore of the messenger utility, was the ability to message an entire workgroup or domain.  To do so, you only needed to specify the workgroup or domain name in the recipient box.  And that’s what he did: he meant to enter John’s machine name, but the domain was the default in the box, and he forgot to change it.
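For the curious, the difference was a single argument (or, in the GUI, a single field). This is from memory, so take the exact syntax as a sketch; JOHNS-PC and OURDOMAIN are made-up placeholders:

    rem Message one machine (what he meant to do):
    net send JOHNS-PC "John, look out behind you"

    rem Message every machine in the domain or workgroup (what actually happened):
    net send /domain:OURDOMAIN "John, look out behind you"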

Hoo boy, that was some trouble, and since this was a political organization, it nearly took the form of someone’s-getting-fired-type trouble.  It took the ‘lizard king’ email storm to finally let it die down completely.  I’ll save that story for another day, when I dig the entire email chain out of archives and obfuscate some details.

The Mysterious Crashing Network

This is a second-hand story, so take it with a grain of salt. My first tech job was for a local computer shop owned by a guy who’d been around a bit. This is his story, from before I knew him, with some added flourishes:

A call came in on a Friday afternoon, around 2 pm, from one of the larger customers with a support contract: the network is down, nobody can get to the fileserver. We’re dead in the water, you have to come out.

Andy makes haste to get on-site, but it takes a while since he’s starting from the other end of town. As he arrives, the network has miraculously recovered.

These things shouldn’t just solve themselves, but then again they shouldn’t randomly happen either (though this is back when hardware was touchier than it is now). Everything checks out now, nothing looks amiss: the NetWare server is running like nothing happened and everyone has file access again. Chalk it up to solar activity or something.

Next Friday afternoon, 2 pm: same call, same problem. And as Andy arrives, the network is coming back to life. Check again: NetWare indicates no downtime. Clearly something happened, and un-happened, before anyone could try to fix it.

This happens a couple more times, and Andy decides that this calls for a pre-emptive strike. He clears his calendar on Friday afternoon and shows up at the client’s site just after lunch. He’s going to wait it out.

Now, this is in the days of NetWare, IPX, and 10BASE2 cabling: one long shared run of coax joining everyone, running at a staggering 10 megabits. NetWare is pretty solid, but 10BASE2 is touchy: the cable must be unbroken and terminated at both ends, or else it doesn’t work. It’s slow because every machine shares the one segment, but nobody complains because it’s cheap to install and files are still relatively small. Nobody has email and the internet is unheard of.

At precisely 2:05 pm, right on schedule, the network goes out. The office is small enough that he can see everyone and confirm that nobody is doing anything nefarious. Just people going about their business: working on documents, having meetings, neatening up the office before the weekend, watering their plants…

It turns out there was a stress fracture in the cable’s sheathing. It wasn’t causing a problem most of the time. But this crack happened to be behind a secretary’s desk, under her new plant.

Every Friday she would overwater that plant, causing the excess to overflow down the back of her desk and over the cracked sheathing, effectively un-terminating the network.

After an hour or so it would dry out and things would go back to normal.

I originally posted this on reddit: http://www.reddit.com/r/linux/comments/2p6qy5/4_impossible_bugs_any_other_stories_like_these/cmu7hte and realized that it really belongs here.  Andy, if you read this, you still owe me some paychecks!