Custom UPS External Battery Bank

When we moved into our current house, I had a Matrix 5000 UPS that I used to power my server room.  This 5kVA unit was a really impressive device that ran on 220V and could have its “brain” replaced without powering down the load.  I had four external battery packs for it, which provided over an hour of runtime at full load.  Last summer it stopped working as a UPS for some reason, and then the isolation transformer failed catastrophically six months later:

 

Unfortunately, this unit is no longer made by APC, and there isn’t much information about it online.

I really liked this setup because the UPS was in the garage, fed by a dedicated 220V 30A circuit, and the UPS in turn fed a small breaker panel that carried two circuits up to the server room. That way, everything was protected by the UPS from surges and small power outages. Since the failure, I’ve been running on small UPSes directly at the critical machines, and using surge protectors for the non-critical stuff.

Recently, I came into a few SmartUPS 3000RMXL units for a good price. Two of them have good batteries, but the batteries in the other two are on their way out. Since they are “XL” units, they have external battery connections, allowing you to add runtime by daisy-chaining 48V battery packs. I decided to construct a large battery bank for one of these units and build the connectors required to hook it up.

The batteries I used are C&D Technologies UPS12-370FR units: 100Ah commercial gel-cell batteries designed for custom UPS systems. Why did I choose these? Well, because they are designed for exactly this sort of thing. Oh, and I also got them for free!

I put one of the SmartUPS units on the platform I originally had for the MatrixUPS and changed my previous L6-30 (220V) plug on the wall to an L5-30 (120V) for the new unit. Then I made two 5-15 pigtails to bring two of the 15A circuits off the back of the UPS into the two legs of the breaker panel. With this configuration, the UPS carries the entire server room, at about 2200W of continuous load. With the existing weak batteries, the UPS claims about 10 minutes of runtime (although I don’t believe it).

The internal battery on the UPS connects to a port on the front near the battery bay using an Anderson SB120 plug. This is actually routed to the rear of the unit where two SB120 plugs are connected by a continuity module in normal operation. The continuity module simply connects the UPS to the internal batteries.

In order to get the external batteries into the mix (and keep the internal ones in place as a buffer during maintenance), I created a cable that I expect is quite similar to what APC provides for its approved external packs. The cable runs two 8AWG conductors from each terminal on the UPS-side connector out to the external battery pack, and a short 10AWG jumper also exits that connector to feed the internal battery side. The finished cable looks like this:

A close up of the UPS-side connector:

Note that, when looking at the back of the UPS, the SB120 plug closest to the line input and circuit breaker is the one connected to the UPS’ charging and inverter circuits. The one furthest from it loops to the internal battery. I made sure the heavy-gauge wire was connected to the UPS side and the jumper to the internal battery, so that the full load current never has to pass through the smaller jumper wire. The SB120 plugs are designed to be stacked in this manner, and have holes for bolts as pictured.
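For a rough sense of the currents involved (and why the heavy conductors belong on the UPS side), here’s a quick back-of-the-envelope sketch. The load, inverter efficiency, and cable lengths are assumptions on my part; the wire resistances are standard copper figures:

# Back-of-the-envelope numbers for the cabling.  The load, inverter
# efficiency, and cable length are assumptions, not measurements;
# the resistance values are standard figures for copper wire.
LOAD_W = 2200          # approximate continuous server-room load
BANK_V = 48.0          # nominal battery bank voltage
INVERTER_EFF = 0.88    # assumed inverter efficiency

OHMS_PER_FT = {8: 0.000628, 10: 0.000999, 2: 0.000156}  # copper, approx.

dc_amps = LOAD_W / (BANK_V * INVERTER_EFF)
print(f"DC current at full load: {dc_amps:.0f} A")       # roughly 52 A

def drop_volts(awg, feet, amps, parallel=1):
    """Voltage drop across a given length of conductor."""
    return amps * OHMS_PER_FT[awg] * feet / parallel

# Two paralleled 8AWG conductors per terminal, maybe 4 ft to the bank;
# out and back makes about 8 ft of conductor in the loop.
print(f"8AWG pair, 8 ft loop : {drop_volts(8, 8, dc_amps, parallel=2):.2f} V")
# The short 10AWG jumper only carries the internal pack's share; if the
# full load current had to flow through it, the wire would be running
# well past its usual ~30 A ampacity.
print(f"10AWG jumper, 1 ft   : {drop_volts(10, 1, dc_amps):.2f} V")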

Next, I stacked all four of the external batteries on top of the UPS, taking care to locate them near the edges of the unit where there is most likely to be internal support for the weight. With more room and care, this could have been done better, and I may change it in the future. Once the batteries were physically placed, I started to link them up in series with short 2AWG jumper wires:

In one of the links, I added a 200A fuse so that a short across the terminals wouldn’t result in a huge amount of current flow and/or an exploded battery. I then attached the battery cable to the 48V side, leaving the cable disconnected from the UPS itself:

Just in case you are wondering, 48V DC is plenty to give you a noticeable shock across a sweaty arm (trust me), so care should be taken once you get to this point.

After checking my connections one more time, as well as the voltage and polarity of the output at the connector, I plugged it into the UPS. Immediately the UPS’ fans kicked on to cool the charging circuit, since the batteries were sitting a few tenths of a volt below where the UPS normally keeps them. This subsided after a minute or two. At that point, I reconnected the internal battery on the front. The finished connection looks like this:

Obviously I still need to do some cleanup.

Next, I logged into the UPS’ web interface and told it that an additional four battery packs were connected. The internal pack contains two parallel strings of four 7Ah batteries, giving about 14Ah of capacity at 48V. Four of those packs would be 56Ah, which I figure is probably close to the limit of what APC would recommend plugging into a single UPS, and may be closer to the effective capacity of the new batteries at high load. Lead-acid batteries are rated for a given capacity only at a specific discharge rate: the faster you pull current out of them, the less total capacity you get.

After performing a calibration test, the UPS now reports (and actually provides) 1hr45m of runtime with the additional pack. This is pretty good for a 2200W load, especially considering that the non-critical stuff gets shut down almost immediately after a power failure, lowering the load and increasing the runtime.
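Out of curiosity, I ran a rough sanity check on that number. Everything below is an estimate: nominal voltages, an assumed inverter efficiency, and a guessed Peukert exponent for the batteries:

# A rough sanity check on the reported runtime.  All inputs are estimates:
# nominal voltage, an assumed inverter efficiency, and a guessed Peukert
# exponent for sealed lead-acid batteries.
RATED_AH_EXT = 100     # external string: four 12V 100Ah batteries in series
RATED_AH_INT = 14      # internal pack, as described above
RATED_HOURS = 20       # capacity is usually specified at the 20-hour rate
PEUKERT_K = 1.12       # assumed; typical sealed lead-acid is roughly 1.1-1.2
BANK_V = 48.0
LOAD_W = 2200
INVERTER_EFF = 0.88    # assumed

amps = LOAD_W / (BANK_V * INVERTER_EFF)       # about 52 A from the bank
capacity_ah = RATED_AH_EXT + RATED_AH_INT     # about 114 Ah nominal

# Peukert's law: t = H * (C / (I * H)) ** k
hours = RATED_HOURS * (capacity_ah / (amps * RATED_HOURS)) ** PEUKERT_K
print(f"Estimated runtime: {hours:.1f} h at {amps:.0f} A")

With those assumptions the estimate comes out to roughly 1.7 hours, which is at least in the same neighborhood as what the calibration run reported.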

I should also point out that I have a very plain 3500W camping generator that doesn’t use an inverter and produces very mediocre output. It’s fine for most things (including electronics), but most UPSes are unable to accept and condition that input when running on the generator. These units let you set the input sensitivity very low, which allows them to run happily when plugged into the generator. The MatrixUPS was never able to do this, even on its lowest sensitivity setting.

I have a few more pictures, and high-resolution versions of the above in my gallery.


The IBM “Jeopardy Challenge”

Most people don’t think of working at IBM as a very exciting endeavor.  At times, I’d be hard pressed to convince them otherwise.  However, there’s something very special about what a large (country-sized) company with a wealth of talent and resources can do to change the world and the way we think about technology.  When I was an intern at IBM in 2004, I had the opportunity to work with an artifact of one of those game-changing demonstrations of technological prowess.
 
I was working for the “multimodal technologies group,” specifically on voice-enabled web browsers for mobile devices.  While it really didn’t pan out (the way we were doing it, that is), the idea was that a voice interface to a web browser on your cell phone or PDA could end up making navigation much easier while walking, driving, etc.  We ended up creating a few applications to demonstrate this functionality in ways that would grab people’s attention.
 
One of the things I worked on was a voice-enabled chess game.  This was not just any chess game, however, because it had some “help” at the server end.  Sitting on a shelf at the Austin site was one of the nodes left over from the Deep Blue machine that beat Garry Kasparov at chess in 1997.  It was only one node of many that actually played the game together, but it had the custom “chess hardware” in it and the software still installed.  I coded up something that would put a web services API on the chess application and allow a web client to play against the computer.  After we got it working, one of the people in the lab figured he could probably beat just a single node, so he gave it a try.  The game was over pretty quickly.
 
IBM recently announced their proposed “Jeopardy Challenge” which has a similar goal: beat a human at something mental by brute force.  However, this one makes Deep Blue look like a walk in the park, if you ask me.  Take a look:
 
 
It seems like an impossible task, but we have to get there at some point I suppose.  I’ll be watching closely to see what develops and how “Watson” does!


CHIRP gains Yaesu VX-7 Support

Until now, CHIRP development has been focused on ICOM products. In fact, you might even be tempted to think that the “I” in CHIRP stands for ICOM. Lately I’ve been using my Yaesu VX-7 and VX-8 radios a bit more for various reasons (they’re a lot less expensive to replace if I lose one, and they’re smaller as well). I’ve gotten so hooked on programming the radio from the computer that having to use the (hard-to-press) interface buttons all the time has become rather annoying.

I’m pleased to announce that CHIRP supports the VX-7 in the latest beta (0.1.10b9) and there will be a formal release coming up soon with it as well:

 

Reverse engineering a radio’s memory format gives you a lot of insight into how each model is designed. It’s interesting how different they often are, and how tightly they pack bits of information to make efficient use of space. I must say that I have a lot more confidence in the ICOM designs than in the Yaesu ones from what I’ve seen thus far. During the trial-and-error period of figuring out how to program the VX-7, there were many times when writing invalid data into memory would cause the entire radio to lock up. Often this was so severe that a full reset was required to get it back to the point where it would agree to power on into clone mode. The ICOM radios are much more intelligent about it and will abort the clone as soon as you write something that isn’t valid.
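To give a flavor of what this kind of reverse engineering looks like, here’s a tiny decoding sketch. The memory layout below is completely made up for illustration (it is not the actual VX-7 format), but it shows the sort of bit-packing these radios use:

# Decode one hypothetical 8-byte memory channel record.  This layout is
# invented for illustration only -- it is NOT the real VX-7 format.
import struct

CHARSET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+-/?."   # hypothetical

def decode_channel(raw):
    # Bytes 0-3: frequency in units of 10 Hz, little-endian unsigned int
    # Byte  4  : flags -- bit 0 = in use, bits 1-2 = tuning step index,
    #            bits 3-5 = mode index, bits 6-7 = power level
    # Bytes 5-7: three-character name, 6 bits per character
    freq_10hz, flags = struct.unpack_from("<IB", raw, 0)
    name_bits = int.from_bytes(raw[5:8], "big")
    name = ""
    for shift in (18, 12, 6):
        code = (name_bits >> shift) & 0x3F
        name += CHARSET[code] if code < len(CHARSET) else "?"
    return {
        "freq_hz": freq_10hz * 10,
        "used": bool(flags & 0x01),
        "step": (flags >> 1) & 0x03,
        "mode": (flags >> 3) & 0x07,
        "power": (flags >> 6) & 0x03,
        "name": name.strip(),
    }

# Example: a made-up record for 146.520 MHz, flagged as in use
record = struct.pack("<IB", 14_652_000, 0b00001001) + b"\x00" * 3
print(decode_channel(record))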

I’ve got a VX-8 programming cable on order, so I hope that in a couple of months I’ll be able to claim support for it too.  Stay tuned! 


Jabber.com DNS record issues

Recently a friend of mine signed up for a jabber.com account and started using it to chat with me. I have run my own jabber server for almost ten years and never had problems with the server-to-server (S2S) aspect until now. For some reason, the jabber.com SRV records fail to resolve at times, which was occasionally killing the jabber.com S2S connection with my server. The connection would periodically recycle, causing my server to look up the SRV records again. If that lookup failed (which was happening multiple times per day), I would be unable to communicate with jabber.com contacts for several minutes, and their status showed as something like “404: Server not found”. The logs of my Openfire server pointed to the failed DNS lookups.

After asking what to do in the Openfire forums, someone mentioned that they had the same issues due to sporadic lookup failures on the jabber.com SRV records.  They suggested spoofing the necessary records to fool my server into connecting to the proper IPs without having to perform an actual lookup.

It is pretty silly that I have to do this, but I ended up working around it by running a local copy of BIND and hosting the jabber.com zone myself internally. That resolved the problem for me. Later, while working on a different project, I noticed that dnsmasq now has the ability to spoof SRV records as well, so I decided to switch to it for this job instead of BIND.

My server is running CentOS 5.x, which has a dnsmasq package available.  I installed it via yum:

yum install -y dnsmasq

Next, I edited the /etc/dnsmasq.conf file and added the following lines:

expand-hosts
resolv-file=/etc/resolv.masq
srv-host=_xmpp-server._tcp.jabber.org,hermes.jabber.org,5269,1
srv-host=_xmpp-server._tcp.jabber.com,jabber.com,5269,1
srv-host=_xmpp-server._tcp.jabber.com,denjab2a.jabber.com,5269,1

Finally, I put the following entries in /etc/hosts:

208.68.163.220   hermes.jabber.org
216.24.133.9     denjab2a.jabber.com
216.24.133.14    jabber.com

Note that the host entries may become stale, so some babysitting of them may be required (a small monitoring sketch is at the end of this post). I decided to override jabber.org as well, since I saw a few similar errors in the logs for that domain too.

Next, you need to put your own DNS servers in /etc/resolv.masq so that dnsmasq knows where to forward normal requests. Something like the following would work, substituting your own DNS server IP addresses:

nameserver 1.2.3.4
nameserver 5.6.7.8

Finally, you need to tell your system resolver to use the local machine (running dnsmasq) for queries.  Set the nameserver in /etc/resolv.conf to localhost:

nameserver 127.0.0.1

Now you can start dnsmasq and configure it to start at boot:

service dnsmasq start
chkconfig dnsmasq on

A restart of Openfire (or whatever you’re using) would probably be appropriate as well.
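Since those pinned addresses can drift, here is the small monitoring sketch mentioned above. It can run from cron and complain when the hard-coded IPs no longer match what the rest of the world sees. It assumes the dnspython package is installed (older versions of dnspython call the lookup method query() instead of resolve()), and it queries an outside resolver directly so it doesn’t just read back the local spoof:

# Compare the addresses pinned in /etc/hosts against a live lookup and
# warn when they drift.  Requires the dnspython package.
import dns.resolver

PINNED = {
    "hermes.jabber.org": "208.68.163.220",
    "denjab2a.jabber.com": "216.24.133.9",
    "jabber.com": "216.24.133.14",
}

# Query a public resolver directly, bypassing the local dnsmasq spoof.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

for host, pinned_ip in PINNED.items():
    try:
        live = {rr.address for rr in resolver.resolve(host, "A")}
    except Exception as exc:
        print(f"{host}: lookup failed ({exc}) -- the flakiness in action")
        continue
    if pinned_ip in live:
        print(f"{host}: still {pinned_ip}, pinned entry is fine")
    else:
        print(f"{host}: pinned {pinned_ip}, but DNS now returns {sorted(live)}")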


Oregon ACES

In just about all areas of public safety, training and certification play an important role in making sure that the people you trust with critical tasks in emergency situations are able to perform. They have to have the knowledge necessary to make informed decisions, but they also have to have some amount of experience executing their duties before it becomes a life-or-death situation. It is for this reason that police and fire personnel are constantly training to increase their effectiveness, which often involves periodic recertification.

Amateur radio operators looking to help with emergency communications usually expect to interact with various public safety organizations for planning, training, and of course, actual emergency situations. Unfortunately, we rarely hold ourselves to the same standard as those we hope to serve. There are some standardized training courses available via the ARRL, but they don’t require you to own or have ever operated a radio, nor do they require you to do anything other than sit in front of a computer for a web course. The material in the course is good, but reading through it does not mean you are magically able to execute in an emergency.

Can you imagine if the only bar for becoming a police officer or firefighter were a voluntary web course? I don’t think people would be willing to place their lives in the hands of such individuals, yet we ask them to do exactly that when we sign on as the backup communications provider for a police, fire, or medical agency. Can’t we do better? I think we can.

Over the past couple of months, I have been working with a few highly skilled people to develop a program called Oregon ACES. The idea is to define several levels of training and certification for amateur emergency communications personnel, to provide those courses and a certification registry for those who have taken them, and to promote frequent training opportunities in the area. We have already enlisted a healthy list of supporters in Oregon, including multiple city and county emergency management departments and other volunteer groups, as well as a couple of counties in our neighboring state of Washington. Additionally, we have nearly completed the process of being accredited by the NRCEV, a national group that recognizes local training programs and certifies participants with additional credentials.

We recently published our basic course outline, and have it open for public comment.  If you’re interested, please look it over and use the linked form to provide feedback about anything we missed or should elaborate on.  Even if you think it’s good as-is, let us know.  The idea here is to provide a healthy amount of classroom instruction, as well as some mandatory group and individual skills demonstration.  The basic level is aimed at a volunteer who is likely to be in the field with just a VHF/UHF handheld radio and maybe an auxiliary power source.  The advanced and other certifications will include things like HF, digital modes, and other topics.

Many existing hams involved in emcomm will wonder something along the lines of “How is this different from the ARRL course?” or “What relation does this have to ARES?”  The FAQ page should answer all of those, and if it doesn’t, let me know.

By the way, you do not have to be in Oregon to provide feedback on this program.  We’ve already received comments from Washington, Pennsylvania, and Texas and would welcome any others!


Survival Guide for Linux Hams

This is a bit of old news, but since it’s now available publicly, I suppose it’s a good time to post about my recent article in the January 2010 issue of Linux Journal.  I was approached late last year to write an article for a “Ham Radio” edition of the magazine.  There was no particular topic, so I suggested an “Amateur Radio Survival Guide for Linux Users” to explore a few common things that Windows users might take for granted.  I covered a little bit about basic contest logging, TCP/IP over packet radio, D-STAR, APRS, amateur satellite tracking, and SDR.  The goal was to provide some starting points for a Linux user that might be a new ham, but it would also interest existing hams that are looking to move to Linux for some of their operating activities.

I should also mention that I got some valuable proofing from Jason, NT7S prior to submission.  Thanks Jason!


2010 Eagle Cap Extreme Dog Sled Race

Last week, Taylor and I were in Joseph, OR for the 2010 Eagle Cap Extreme dog sled race. This was the second year that the race administration used ham radio as its primary communications mechanism. Without us, they would be limited to mediocre satellite phone coverage or the unlicensed short-range radio services.

The area is quite remote. There was some cell coverage in Joseph itself, but just yards outside of the tiny town there is no signal at all. Mountains and canyons make satellite phone coverage spotty and far from ideal, even at the extreme cost of using it. More than 250 volunteers spread across a dozen locations along the 200-mile course means there is a lot to coordinate.

The local hams used a pair of repeaters linked via EchoLink to blanket the area with communications capability. The Joseph (Oregon) repeater provided surprisingly good coverage for the northern portion of the course, while the McCall (Idaho) repeater covered the southern part. With very few issues throughout the week, many locations were able to get into one or both machines with a handheld or a modest mobile radio.

I worked the 1600-0000 shift up at Salt Creek Summit.  This snow park at almost 6000 feet could hit just about everyone on simplex and provided a good backup in case either of the repeaters went down.  A ham from Pendleton brought his fifth-wheel travel trailer and parked it there for operating convenience for the duration of the event.  A Honda EU-3000i generator running 24×7 provided power for the radios, lights, and other conveniences.

This was the first year that the event attempted to use any sort of amateur digital transmission, on my recommendation. I brought a packet BBS-in-a-box (a small JNOS setup on Linux) that I ran at the summit. We also provided an old laptop and radio setup for Race Central running Outpost. I had coordinated with Chris at Ollokot ahead of time to get him going with a similar configuration. Since the primary race communications were on 2-meters, we hoped to use UHF for the packet system to keep interference to a minimum. Unfortunately, we found that 35 watts from a standard mobile radio on 440 MHz wasn’t enough to fight the challenging terrain and long distances between the stations. We had to fall back to a 2-meter frequency, which meant that packet wasn’t really usable except during lulls on the voice channel. It wasn’t the resounding success I had hoped for, but sharing a band with the voice traffic was a losing battle from the start. Next year, we hope to give 6-meters a shot, which we think might be a real solution.
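For what it’s worth, the physics were against us from the start: even in free space, path loss at 440 MHz runs close to 10 dB higher than at 146 MHz, and the canyon terrain only makes that worse. A quick illustration (the hop distances here are guesses, not measured paths):

# Free-space path loss comparison between the 2m and 70cm bands.  Terrain
# losses come on top of this; the distances are assumptions.
from math import log10

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.45"""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.45

for km in (20, 40, 60):
    loss_vhf = fspl_db(km, 146.0)    # 2-meter voice/packet
    loss_uhf = fspl_db(km, 440.0)    # 70-centimeter packet
    print(f"{km:3d} km: 2m {loss_vhf:5.1f} dB, 70cm {loss_uhf:5.1f} dB, "
          f"difference {loss_uhf - loss_vhf:4.1f} dB")

That difference of roughly 9-10 dB is close to a factor of ten in power, which helps explain why 35 watts on UHF struggled where 2-meter voice did fine.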

Taylor worked the same shift at Race Central. She put her technical knowledge to work helping them establish email communications with external entities, as well as organizing the information that was flowing in and out. She did an excellent job of making the best of the packet system from the “other side”; we exchanged many messages during the race, timed appropriately to avoid interference with the voice channel.

We were both prepared and hoping for a bit more of a camping scenario.  At one point, it was expected that we wouldn’t have the trailer up at the summit and that we’d be in a tent.  Thus, I was over-prepared when I found myself sitting in a heated trailer with a microwave and a recliner.  Taylor, being at Race Central, was always in a heated building.  That area of the state is currently far below its expected snow level, too, which we weren’t anticipating.  All of the other checkpoints, however, were far more isolated, with only snow machine access in and out of the camps.  I think we’re both hoping that we can participate in one of those next year.

We had a blast the entire time.  It was exhausting, rewarding, and a real break from everyday life.  We’ll definitely go back next year, and I’ll be much more familiar with the terrain and operation of the event so that I can have a better plan for getting a reliable digital system up and running.


All hail Winlink 1988!

Winlink 2000 is a hot topic in the amateur community right now.  It provides a mechanism to access Internet email over several different transports, including an Internet connection, local V/UHF packet radio, and long-distance HF Pactor.  The idea is great, and in practice, it works reasonably well.  There are a lot of complaints to be made about the design and implementation of the system, but at the end of the day, those guys put in the time, effort and money and made it work.

One of the (many) complaints I have is their use of the ancient B2F forwarding protocol.  It’s fine to use that over slow Pactor links (I suppose), but why aren’t we just using something like POP3 for the Internet hops?  Rather silly, I think.  Anyway, one of the design points of the B2F protocol is the use of an even more ancient compression algorithm called “lzhuf”.

The algorithm and code for this were written in Japan in 1988 to run on a 16-bit machine. The source, and many disparate alterations of it since then, are sprinkled around the Internet and easy to find via Google. However, most people use either the command-line LZHUF_1.EXE file that has been around forever, or the DLL-ized version that the Winlink applications deliver and use. This effectively limits its use to Windows machines (and dosemu-equipped Linux boxes). When I tried to compile several of the variants under Linux, I found that the code makes a bunch of assumptions about type sizes, and thus crashes and/or fails to decode compressed text. In fact, depending on where it fails, sometimes it runs off in an endless loop writing garbage to the output file until you kill it!

After spending a lot of time looking for someone who had fixed the code to compile on a modern 32-bit system, I finally found a copy of the source that compiles with g++ and actually runs properly. The updated source code is here, and I have archived a copy of it, as well as a static Linux binary in case you don’t want to compile it yourself. If you run the binary with no arguments, it will print a usage message.

Now, you might ask, “Dan, why does Winlink 2000 use this old, unmaintained, fragile, and obscure compression algorithm?” Well, in these days of freely available code, algorithms, and libraries for advanced compression, encoding, and so on, I can assure you that the top-notch Winlink engineers have a good reason. Right? I figured that this obscure gem from the golden age of 4MHz PCs must be an undiscovered compression miracle, one that lets the extremely slow Pactor connections transfer data as efficiently as possible. So, I decided to compress some test files with lzhuf, as well as with the freely-available-and-well-regarded gzip and bzip2 algorithms, and compare the results.

As input, I used the lzhuf source code itself, which is about 19KB in size. That’s a pretty good-sized email, even with a file attachment. Below are the results:

Method         Size            Reduction
Uncompressed   18,917 (19KB)   (n/a)
lzhuf           5,385 (5.3KB)  72%
gzip            4,903 (4.8KB)  74%
bzip2           4,589 (4.5KB)  76%

So there you go: with bzip2, you’d get almost a kilobyte less data to transfer than you would using lzhuf.  Does a kilobyte really matter?  Well, Pactor-I is 200 baud (at most), with very small block sizes.  Yes, I think I’d rather save that kilobyte.
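If you want to reproduce the gzip and bzip2 numbers, a few lines of Python will do it. This only covers the two standard-library codecs (lzhuf has to be run separately), and the default input filename is just a placeholder for wherever you keep the test file:

# Compress a file with gzip and bzip2 at maximum settings and report the
# resulting sizes, similar to the comparison above.
import bz2
import gzip
import sys

def compare(path):
    data = open(path, "rb").read()
    results = [
        ("uncompressed", len(data)),
        ("gzip -9", len(gzip.compress(data, compresslevel=9))),
        ("bzip2 -9", len(bz2.compress(data, compresslevel=9))),
    ]
    for name, size in results:
        reduction = 100.0 * (1 - size / float(len(data)))
        print(f"{name:>12}: {size:6d} bytes  ({reduction:4.1f}% smaller)")

if __name__ == "__main__":
    compare(sys.argv[1] if len(sys.argv) > 1 else "lzhuf.c")  # placeholder name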

So, I ask the Winlink 2000 developers: why not move to bzip2 compression? It’s free. It’s widely available. It’s considered one of the best. You put “2000” in your name to make the system sound new, fresh, and modern, so why not use a compression algorithm to match?


NTS messages in Emacs

Passing messages via the ARRL National Traffic System (NTS) has recently caught my fancy.  If you don’t know anything about it, you should take a look at the NTS page on the ARRL website.  In short, it’s a network of hundreds of ham radio operators that tirelessly meet multiple times each day on local, regional, state, and transcontinental “nets” (conference calls on the radio) to pass traffic around.

Back before email and unlimited long distance, you could go to your neighborhood ham and give him a message for your mother across the country. That ham would insert it into the system, and all the hams in between would pass it along until it reached the proper area, at which point a ham local to the recipient would deliver it in person, by mail, or by phone. Nowadays there is not much real traffic to pass, but all the involved operators still meet multiple times a day, 365 days a year, to practice and keep the system oiled and working. If we were ever set back to the stone age (communications-wise), the hams would be ready to pass a large volume of messages.

Anyway, it’s very important to be able to copy the message down in the proper form, which is an ARRL radiogram. You can certainly do that by printing hundreds of those forms and copying by hand, but that gets wasteful and is hard on your writing hand (I type much faster than I write). For a while, I was copying the messages into a plain text file and then quickly counting the words for the check by hand, but decided that was rather silly.

So, I decided to see if I could write something in elisp to help me out. I’ve never written anything like a major mode or user interface before, so it was a learning experience. The result is nts.el, which gives me a fillable form that helps with the format, validates the check (the word count), and records the received and sent timestamps automatically. It also helps me manage the messages by keeping them organized into “Active” ones that need to be passed along and “Completed” ones that have been handled and need to be archived. It looks like this:

 

I don’t expect there are many Emacs users that also participate in the NTS system, but if so, feel free to take a look at the code.
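For the curious, the check validation itself is trivial in any language. Here’s the same idea sketched in Python rather than elisp: the check in the radiogram preamble is simply the count of groups (words) in the message text, setting aside the ARL prefix convention used with numbered texts:

# Count the groups in a radiogram text and compare against the check
# given in the preamble.  This ignores the "ARL" check prefix used with
# ARL numbered texts.
def count_groups(text):
    """One group per whitespace-separated word in the message text."""
    return len(text.split())

def validate_check(text, claimed):
    actual = count_groups(text)
    if actual != claimed:
        print(f"Check mismatch: preamble says {claimed}, text counts {actual}")
        return False
    return True

validate_check("HAPPY BIRTHDAY X HOPE TO SEE YOU SOON X LOVE", 10)   # passes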


ID800 Magnetic Head Modification

The ICOM ID-800 is a very common D-STAR radio, and for good reason: it includes the digital functionality, dual-band operation, and a remote-able head.  It’s much easier to mount a small head unit in view than an entire radio, with all the antenna and data cabling that is required.  In my Jeep, I have the ID-800 base unit under the driver’s seat, bolted to the floor, with all the cables neatly run under the carpet from the various places (antenna, battery, GPS).

Until recently, I had the head mounted with some industrial-strength adhesive velcro to the center of the visor trim panel.  There was a nice blank spot there and it’s a good location for the head, visibility-wise.  Unfortunately, ICOM does not make (or no longer makes) a mounting plate for the radio, which means you’re pretty much stuck with some sort of adhesive mounting solution.  This is bad for many reasons, but most notably:

  1. The head won’t fit back on the base of the radio with the velcro (or the messy gook the adhesive leaves behind) in place
  2. Nobody wants to glue stuff to a $500 radio
  3. Almost any adhesive that will still be removable down the road turns to grape jelly in the heat of the summer

In fact, point three in the above list is what drove me to make a change.  I had replaced the velcro once already this summer, and didn’t feel like making the process a twice-yearly event.

In the newer radios, ICOM has made a huge improvement: magnetic head units. With magnets affixed to the back of the head, all you need to do is mangle a piece of sheet metal into place and stick the radio to it. When you want to move it, you need only tug it out of place. No gook, no mess, no problem.

So, while I was chiseling the latest batch of failed grape jelly off of my ID-800’s head this weekend, I decided to see what I could do to make my unit magnetic.  In order to qualify as an improvement, I set the following criteria:

  1. It has to be a clean modification, such that it looks reasonable and is worthy of sale someday
  2. It has to be adhesive-free and gook-free
  3. It has to allow the head to be mounted back on the base unit, should that be necessary someday

What I came up with met all three and is a major improvement, in my opinion.  Note that this may void your warranty, destroy your radio, and/or wipe your hard drive (the magnets are strong).  However, it’s pretty mild and works well for me.

First, I opened up the head.  I was surprised and pleased to find that ICOM used actual screws to hold the unit together instead of explosive plastic clips like most of the junk nowadays.  Here’s what it looks like inside:

Note all the extra space in the back cover. It may be hard to see in this picture, but it’s actually quite roomy back there. I went to Surplus Gizmos hoping to find some of the uber-small, uber-strong rare earth magnets that look about like a watch battery. Unfortunately, they didn’t have any of those, but they did have a big pile of strong kidney-shaped magnets salvaged from hard drives. They looked like this:

The backing plate isn’t really part of the magnet; the two small kidney-shaped pieces are where the power is. The magnets themselves will happily separate from the plate and are quite thin. Their outline is larger than I wanted, but they worked out. I pulled a bunch off and loaded up the back cover with them:

Now, you’ll notice that I have the cover sitting on a piece of sheet metal. These magnets are so strong that they will literally jump across the cover to mate with each other. The sheet metal gives them something to pull on from below, which helps keep them in place. It still took me several minutes to get them to sit flat in the cover, which was rather frustrating.

The magnets are covered in metal, so I wanted to insulate them at least a little bit from the board they face in the front half of the control head. Since this part won’t be seen once it’s together, I cheaped out and cut some thin receipt paper to cover each magnet:

Just enough to insulate each magnet. The receipt paper is very thin and wasn’t a problem to cram in there. Finally, I needed a way to keep the magnets from wandering out of place under vibration. I could have used (and may go back and use) some heat-tolerant glue like epoxy to affix them, but this was an experiment and epoxy is very permanent. At this point, I could probably put the head back together in such a way that the warranty department wouldn’t notice or care (unless there was an issue with the head itself); epoxy would eliminate that option forever. Anyway, I could have done just about anything here, but opted for some cotton. It won’t turn to gook, is relatively inert and harmless, and provides enough tension to hold things in place. It’s just what came to mind first.

Anyhow, I carefully screwed the case back together, taking care to keep the magnets from jumping out of place before everything was under pressure. With it all secured, the head is strongly magnetic from the outside and more than strong enough to hold itself on a piece of sheet metal. I’ve got a metal mount bent into place in the Jeep, and the head now sticks there by the magical force itself.
