Git, The Kernel And External Drivers: Look Ma I Can Version Magic

I was just compiling a module I'd been given outside the kernel source tree, and
got an error trying to load it:

version magic '2.6.x.y-7aac9b1d2 mod_unload' should be '2.6.x.y mod_unload'

Weird, since I was compiling it against the same source tree the kernel was built from.
And then I realized: the appended string looks a lot like a git hash! (Hey, not all hexadecimal strings are alike.)

Turns out that if you have CONFIG_LOCALVERSION_AUTO turned on in your .config, and the kernel tree you're compiling is detected to be under git control, the resulting binary will have the current git hash appended to its version magic string.

And now I know that this also applies to modules you compile against such a kernel tree, i.e. one governed by a git repo.
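Next time, a quick way to confirm where the suffix comes from, assuming you run these from the top of the kernel tree the module builds against (the module file name below is made up):

```shell
grep CONFIG_LOCALVERSION_AUTO .config   # is the auto-suffix option on?
make -s kernelrelease                   # the version string the build embeds
modinfo -F vermagic ./mymodule.ko       # what the module was built against
uname -r                                # what the loading kernel reports
```

If `make kernelrelease` and `uname -r` disagree, the module and the running kernel were not built from the same state of the tree.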

good to know.

Posted in "software engineering", Drivers, Kernel, source control

Wifi roaming on the blackberry

Here's a problem I have with my BlackBerry that was not annoying enough to encourage me to research it until now: it will not automatically connect to a Wi-Fi access point.
I can find and set up a Wi-Fi network just fine, and have already set up one for my home AP, one for the work AP, and even one for my favorite bar (which I have not visited since).

However, every morning at work and every evening at home I had to manually navigate to the Wi-Fi menu on my BlackBerry and change the active AP profile.

Needless to say, I under-utilized the cost savings of Wi-Fi by frequently forgetting to switch APs.

Turns out it's really easy to cure: I just had to navigate to 'Wi-Fi Options' and uncheck 'Enable single profile scanning'.

That’s it — free bandwidth here I come.

Posted in productivity tools, remote, Uncategorized, workaround

Guru Meditation: VirtualBox and Commodore 64

Just got this error "Guru Meditation -2701 (VERR_VMM_RING0_ASSERTION)" on virtualbox-ose 3.2.8, hosting a Windows 7 guest on my Ubuntu 10.04.1 machine.

Looking around for the error, it turns out that the error text itself is a homage to an old Amiga system error, which reminded me of the book "On the Edge: the Spectacular Rise and Fall of Commodore", which I read last year.

I took all my first steps in the computing world (except maybe using a modem and a BBS) as a teenager on my Commodore 64, which included: gaming, BASIC programming, assembler programming, machine-language programming, and my very first steps in digital music composing.

Reading the narrative behind this marvelous, magical machine that taught me so much of what I know and like today was absolutely fascinating and beyond. Learning what random marketing and social interactions influenced choices and shaped the technology world as we know it today was eye-opening, as was realizing that history is shaped not only by inventors and geniuses but also, to a varying degree, by petty rivalries and personal grudges.

At points it seems that the narrative is slightly biased towards the engineers' perspective of things, but I'll be the last to complain about that; between the competing narratives I would much rather hear the engineers' and the designers' side.

I found the book to be a real page turner and informative at the same time as the engineering drama unfolds.

Incidentally, after I hit the said Guru Meditation bug, trying to start VirtualBox again by issuing:

vboxheadless -startvm windows7

resulted in the following error in the VirtualBox log (if you're looking for it, note that it is not created under /var/log by default, but rather in the running user's home directory):

PDM: Failed to construct 'e1000'/0! VERR_SUPDRV_INTERFACE_NOT_SUPPORTED (-3701) - The component factories do not support the requested interface.

This was cured by reloading the drivers as follows:

sudo modprobe -r vboxnetflt
sudo modprobe -r vboxnetadp
sudo modprobe -r vboxdrv
sudo modprobe vboxdrv
sudo modprobe vboxnetadp
sudo modprobe vboxnetflt

Posted in book review, Command Line, open source, productivity tools, workaround

Tarball as a remote git repo

I have two computers that do not share a network link; let's say, for simplicity's sake, that one of the machines is disconnected. I do, however, want to share my ever-growing .emacs.d directory of emacs-lisp goodness.
Since the Emacs configuration is actually code, the obvious choice is to use source control management software to keep it in sync, and among the newer SCMs I'm most familiar with Git.
Ideally I wanted to mimic the way git works, just without a network; that is, I wanted to be able to simply push my changes by issuing a simple enough command.
I came across this article, which suggests creating a remote repo on a USB thumb drive and pushing and pulling from it. It's a very good idea and I've used it a couple of times.
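The core of the thumb-drive trick (a bare clone used as a plain directory remote) round-trips just fine through a tarball. Here is a self-contained sketch you can run in a scratch directory; all paths, file names and the 'sync' remote name are made up for illustration:

```shell
WORK=$(mktemp -d)

# a stand-in for the real .emacs.d repo
cd "$WORK" && git init -q src && cd src
echo hello > init.el
git add init.el
git -c user.name=me -c user.email=me@example.com commit -qm "first"

# "push": serialize the repo into a tarball via a bare clone
git clone -q --local --bare --no-hardlinks . "$WORK/dotemacs.git"
tar -C "$WORK" -czf "$WORK/dotemacs.tgz" dotemacs.git

# "pull" on the other machine: untar and fetch from it like any remote
cd "$WORK" && git init -q other && cd other
tar -xzf "$WORK/dotemacs.tgz"
git remote add sync ./dotemacs.git
git -c user.name=me -c user.email=me@example.com pull -q sync HEAD
ls   # init.el is now here
```

Since a bare clone is just a directory, tar/untar preserves everything git needs, so any serialization channel (thumb drive, email, whatever) would do.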

However, due to the relatively long period between my synchronization sessions, I had to refer back to the article to remind myself of the steps, which after a few times made me want to script the process.

Having sat down to plan the automation, it occurred to me that what I really wanted was the ability to dump to a tarball and update from a tarball as if it were a real repo; I didn't want to be looking for that last thumb drive I'd created the 'remote' repo on.

So my requirements became simple: create a sync-object which I'll be able to serialize and de-serialize myself in any way, independently of the script; once de-serialized, allow updating an older repo from it, and then dispose of the object. Come to think of it, this is really like a 'patch' for a git repo.

Below is the script I came up with; the usage, I believe, is quite straightforward:
to create an update object called dotemacs.osx.tgz I type the following: push ~/dotemacs.osx.tgz

to update an existing git repo from dotemacs.osx.tgz I type: pull ~/sync/dotemacs.osx.tgz



#!/bin/bash

trace_commands=1
execute_commands=1

function die ()
{
    echo "$1" >&2
    exit 1
}

function run_cmd ()
{
    if [ $trace_commands -eq 1 ]
    then
        echo "run: $1"
    fi
    if [ $execute_commands -eq 1 ]
    then
        eval "$1"
    fi
}

function push_to_repo ()
{
    if [ "$#" -lt 1 ]
    then
        die "1 argument required, $# provided (type git remote for a list of repos)"
    fi
    run_cmd "git push $1"

    # trickery to find the repo dir based on the repo name:
    # git remote -v shows a list that needs reducing, there are two entries
    # per repo, as follows:
    #   osxproxy      file:///foo/temp/remote_repo/dotemacs/ (fetch)
    #   osxproxy      file:///foo/temp/remote_repo/dotemacs/ (push)
    # the grep -m 1 (max count) reduces the list to the first entry,
    # awk takes the second field (space is the field separator)
    # and finally sed removes the 'file://' prefix
    REPO_PATH=`git remote -v | grep -m 1 $1 | awk '{print $2}' | sed -e 's!file://!!1'`
    # remove the trailing slash
    REPO_NAME=`echo $REPO_PATH | sed -e 's!/$!!'`

    D_NAME=`dirname $REPO_PATH`
    B_NAME=`basename $REPO_PATH`

    # tarball the bastard; first get any old copy out of the way
    if [ -f "$REPO_NAME.tgz" ]
    then
        run_cmd "mv $REPO_NAME.tgz $REPO_NAME.$(date +%Y%m%d).tgz"
    fi

    PREV_DIR=`pwd`
    run_cmd "cd $D_NAME"
    run_cmd "tar -czf ./$B_NAME.tgz ./$B_NAME"
    run_cmd "cd $PREV_DIR"

    # copy the tarball to the given path
    if [ "$#" -eq 2 ]
    then
        run_cmd "cp $REPO_NAME.tgz $2"
    fi
}

function pull_from_repo ()
{
    run_cmd "git pull $1 master"
}

function create_repo ()
{
    REMOTE_REPO_NAME=$1
    if [ "$#" -eq 2 ]
    then
        REMOTE_REPO_DIR=$2
    else
        REMOTE_REPO_DIR="/mcradle/temp/remote_repo/$1_$(date +%Y%m%d)/"
    fi
    REMOTE_REPO_GIT_DIR="$REMOTE_REPO_DIR$REMOTE_REPO_NAME.git"
    run_cmd "mkdir -p $REMOTE_REPO_DIR"
    run_cmd "mkdir $REMOTE_REPO_GIT_DIR"
    run_cmd "git clone --local --bare --no-hardlinks . $REMOTE_REPO_GIT_DIR"
    run_cmd "git remote add $REMOTE_REPO_NAME $REMOTE_REPO_GIT_DIR"
    run_cmd "git push $REMOTE_REPO_NAME master"
}

function destroy_repo ()
{
    run_cmd "git remote rm $1"
}

function push_to_tarball ()
{
    copy_to=$1
    temp_dir="/tmp/$(basename $0).$$.tmp/"
    run_cmd "mkdir -p $temp_dir"
    repo_name="autopush_$(date +%Y%m%d)"
    create_repo $repo_name $temp_dir
    push_to_repo $repo_name $copy_to
    destroy_repo $repo_name
    run_cmd "rm -rf $temp_dir"
}

function pull_from_tarball ()
{
    # this expects a tarball as a parameter
    TARBALL=`readlink -f $1`
    temp_dir="/tmp/$(basename $0).$$.tmp/"
    repo_name="autopull_$(date +%Y%m%d)"
    run_cmd "mkdir -p $temp_dir"
    prev_dir=`pwd`
    run_cmd "cd $temp_dir"
    run_cmd "tar -xzf $TARBALL"
    repo_file=`readlink -f *.git`
    run_cmd "cd $prev_dir"
    run_cmd "git remote add $repo_name $repo_file"
    pull_from_repo $repo_name
    destroy_repo $repo_name
    run_cmd "rm -rf $temp_dir"
}

if [ "$#" -lt 2 ]
then
    die "2 arguments required, $# provided."
fi

case "$1" in
    push)
        push_to_tarball $2
        ;;
    pull)
        pull_from_tarball $2
        ;;
    *)
        echo "Usage: cd to the same dir where the .git directory is and type $0 {push new_tarball|pull existing_tarball}"
        exit 1
        ;;
esac

exit 0


  1. It needs a little bit of cleaning up: there are leftovers of experiments I did before landing on the design, which makes it over-complicated, and it needs consistency in variable names (case, for one).
  2. I don't understand the effects of creating a new remote repo on every sync and then destroying it: I don't know what it takes down with it, or whether git can handle many of these joining and leaving the repo.
  3. It works for me, and it's fast and cool.
  4. If I had to do it all over, I would probably have used Python.
  5. This is sunny-side up; error checking could be useful.
Posted in open source, remote, source control, sync, Technology, Uncategorized

If Twitter killed Bloglines, will Facebook Connect kill OpenID?

I'm currently considering starting to use OpenID; not exactly an early adopter, I know.

I have been looking around for best practice as known and blogged so far. My conclusion from the online research is that "OpenID delegation" is a safe way to go, in case one's OpenID provider is shut down, like, say, Vox (yet another twitter-killed-the-internet case).

The thing with OpenID delegation is that it requires one to own, or at least control, a URL in order to advertise the redirection to one's OpenID provider of choice.
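For the record, delegation itself boils down to two link tags in the head of a page one controls; OpenID 1.x uses the openid.server and openid.delegate relations (the provider URLs below are invented). A quick way to eyeball them on any page:

```shell
# a stand-in for a delegated page (in real life: curl -s the page instead)
page='<head>
<link rel="openid.server" href="https://provider.example/server">
<link rel="openid.delegate" href="https://me.provider.example/">
</head>'

# list the delegation link relations the page advertises
echo "$page" | grep -o 'rel="openid\.[a-z]*"'
```

OpenID 2.0 renames these to openid2.provider and openid2.local_id, so a thorough check greps for both families.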

Hosted WordPress does not function as a delegation agent (I do know that a local WordPress installation will allow the headers to be modified, thereby making one's blog function as an OpenID delegation agent).

EmailToID seems like a good solution, as it provides a level of indirection and does not require control of a URL. However, looking further into this, it seems that this is/was a service offered by a company called Vidoop that, as it seems, died largely unannounced, and was at some later point bought/reincarnated into another company called Confident.

The EmailToID service itself seems to be up and running as of today; however, I'm confused about what entity, if any, is currently operating the service.

On top of that, not wanting to go through the trouble of setting up an EmailToID account, I am unable to tell whether one can change one's OpenID provider once it is set (which is the whole point of the redirection); this article, which seems to have been published around the time the service was launched, makes me think it might be possible.

To me it seems that the story behind Vidoop tells not only the story of the company, but also the story of the smart internet start-ups of the post-bubble era and the great black holes called Google and Facebook.

Posted in "The Interweb", bloglines, OpenID

In my world RSS is not dead.

I've just noticed that Bloglines is going defunct. This will force me to renew my quest for a good RSS reading platform. I sure hope that RSS is not dying like the Bloglines obituary seems to suggest. Following are my requirements for an RSS reader:

  1. Not owned and/or operated by Google.
  2. Cloud based.
  3. OpenID login support.
  4. Runs under Linux, OSX and Windows.
  5. Mobile friendly: renders nicely on both iPod Touch and BlackBerry.

The obvious choice, disregarding requirement #1, would have been Google Reader, but I'm trying to reduce my current Google dependency, let alone deepen it. Feedly could have been great if it were not built entirely around Google Reader. If none of the web apps fit my needs, I'll consider coding my own online RSS reader.

Any suggestions?

P.S. My next quest (and an ongoing research target) is to replace iTunes as a podcatcher (a podcast client, if you will), but I really have not found a better podcatcher yet.

Edit (October 2010): I am now using a reader found via this question at the excellent webapps stackoverflow spin-off. It is not owned by Google, it is a web app, and it supports OpenID login; as for mobile friendliness, we shall see.

Gonzo Media

P.P.S. If they are right and it's going to be "Social Media" that replaces RSS, then I suggest we rename "Social Media" to "Gonzo Media".

Posted in "The Interweb", Technology

Sync Early, Sync Often: Unison, Plink and SSH

I like adjusting software to fit precisely whatever it is I'm trying to achieve. I can handle a few popular programming languages quite well, and others well enough to tweak, kludge and patch as necessary.

When I installed Unison, my file synchronizer of choice, and tried to sync my old Windows XP with my (almost) brand-new MacBook Pro, I wasn't thrilled to see this error message displayed at my DOS prompt:

the path X: is a root directory

See, the reason being that Unison, being open source and all, is written in OCaml. Maybe I'm a snob, but I don't anticipate needing OCaml coding skills anytime soon. Looking around at what has been written in OCaml:

Zenon is an automatic theorem prover written in OCaml.

…I guess I’ll write my own Theorem Prover in Python should the need arise.

Anyway, I could google my way all the way to the source code, but that still did not help me much in figuring out how to solve this.

The mission at hand: I'm trying to sync an entire drive from an XP machine to an OSX machine; I had been able to sync the very same directory with my now-defunct Gentoo machine.

This brings me to another issue I have with Unison: it forces you to run the same version on both machines you sync. This is pretty annoying: I had a perfectly working setup on the XP machine, but since I had to install Unison on my OSX machine and wanted to use MacPorts, which in its turn installed some later version of Unison... sure, I could have mucked around with MacPorts and forced a specific version, but I forgot that Unison is so finicky and lacks decent backward compatibility.
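Nowadays the first thing I check is version parity between the two ends; a minimal sanity check, where 'osx' is the ssh alias from my own setup (substitute your own host):

```shell
unison -version             # local version
ssh osx unison -version     # version on the remote end; the two must match
```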

While I generally tend to subscribe to the idea of fixing the world one bug at a time, I had to skip the opportunity this time around and work my way around what looks like a new bug in Unison. Since the problem seems to be with specifying the root directory of a drive in Windows as the sync source, I worked around it by specifying all the subdirectories instead of the root directory. In turn I had to bite the bullet and do some DOS scripting (something I have not done since the late 80's).
Here is my script, iterating over the top-level directories of drive X and sending each to Unison for synchronization:

for /d %%A in (X:\*) do Unison-Text.exe %%A ssh://osx//users/mousecradle/xp/%%~nxA -sshcmd plink-me.bat

plink-me.bat looks like this (I think I've taken it from here):

@plink osx -i "osx.ppk" -l mousecradle -ssh unison -server -contactquietly

Contrary to the impression my rants here might create, I think that Unison is a tremendously useful piece of software, and I have been a relatively happy user for many years.
Hey, it's an open-source, multi-platform, Unix-flavored, ssh-aware, text-mode-friendly file synchronization tool. I had to learn Lisp for Emacs (or rather emacs-lisp, if I'd like to be precise); no way I'm learning OCaml.

Posted in Command Line, open source, productivity tools, Technology

dtach and dvtm, Modern Tools Ancient Interface

Been pretty interesting in the past few weeks from a technology standpoint.

Have been working on writing a custom Linux device driver; while this in and of itself should be interesting, what was even more interesting was getting to know the tools and adjusting them to my (one would think) particular taste.

Been wanting for a while to get to know a powerful text editor. I wasn't looking for a cult in particular, but accepted that one would more than likely be part of the deal.

Well, back when I was playing with the very first versions of MythTV (it must have been 2003) I got to know the advantages of remotely accessing my Linux box, which must have been running Mandrake (recently I learned a whole lot about the plant while Wikipedia surfing, but that is another story altogether). I used to access my personal little hacked projects every chance I had, and pretty quickly became a fan of ssh (by all means more useful than sliced bread) combined with GNU Screen.

(Incidentally, the reason I prefix Screen with GNU is not because I subscribe to the idea that anything compiled with GCC should be prefixed with GNU; rather, 'screen' is such a common word in a technology context that writing 'GNU Screen' clarifies that one is discussing the software rather than the hardware. But this is most definitely for yet another discussion.)

To go with GNU Screen I had to work with one of them text-mode editors, and for reasons I cannot remember I started to work with Emacs more frequently. This was the very basics of text editing: hack a Perl script here and there to acquire xmltv listings or fix some issue with a Qt display widget; nothing big, by all means not extensive use.

Having started my new job in the last two years, I have had the chance to dive, for the first time in my professional life, into a mostly Linux environment. I was pretty excited and decided to take the opportunity to finally deepen my knowledge of Emacs.

Religions aside, I'm loving it. The bloatware doesn't bother me, as disk space is never an issue nowadays, and when learning new-tricks-from-old-dogs most of the dependencies are already built in. For the most part the learning curve is well worth it: as I pick up new uses for Emacs, they tend to share a consistent key binding and design philosophy. But by all means the killer feature for me is the fact that it's a full-featured text editor (the running joke is that Emacs is a good operating system lacking a good text editor) that runs completely in text mode. Now I can get whatever runs under, over and with Emacs accessible anywhere ssh and screen are available, and that is just great for me.

Having learned that, I quickly started to have a separate Emacs session per project, each running inside a screen session; picked up a few tricks from Emacs Fu and googled the rest. I'm fairly happy with my setup.

Got to the tweaking level where I have the 256-color zenburn Emacs color scheme going, after struggling a bit with PuTTY, Ubuntu and Emacs to convince them all that they do support them 256 colors. I'm also using the same .emacs on both my Ubuntu and my OSX machines, after learning enough emacs-lisp to survive.

The thing I love the most about this setup is that I can leave work, say in the middle of debugging an issue, drive home, eat dinner or whatever, then ssh to work and type

screen -R session.drivers

and I'm right there where I left everything; it takes me a few seconds of recap in my head to continue from where I left off. This is just great.

However, the quest for convenience and productivity never ends.

See, at home I have this 24-inch Dell 2408WFP, which by the way is a great non-GNU screen that I bought the second it was available on the market; actually I think I may have even pre-ordered it. This means that at home I have way more non-GNU screen real estate, and my ssh screen sessions could just be laid out side by side, or tiled if you will. But if they're tiled anyhow, why wouldn't I just let a tiling window manager do the job for me? Plus it would save me all these keyboard taps that I can later spend on, say, communicating all these very particular thoughts.

Looking around a bit I found dvtm, which is a tiling text-mode window manager: exactly what I was looking for.

So the plan is to have four windows tiled, each connected to a pre-existing (and different) GNU's-not-Unix Screen session.

so to give you an idea I’d like to have:

quarter of my screen dedicated to an Emacs session with my driver source code, toolchain etc.

quarter dedicated to my test application source code, compilation window, shell to run it etc.

another quarter for a serial port with log of the target etc.

and the last (and yes, least) quarter to, I don't know, something else (right now I have my dvtm cheat sheet there, but that is way too much real estate for it going forward).

So I do that by creating four dvtm windows (this is, by the way, C-g c in the default binding, which is not Emacs friendly *sigh*).

Then, at the shell prompt I get in every dvtm 'window', I issue 'screen -x 1235.drivers' to "Attach to a not detached screen. (Multi display mode)". This is great, as it allows me to have two 'views' of the same screen session. After repeating all this three more times I have all my Emacs sessions tiled across my entire 24" screen, and if I put PuTTY into full-screen mode it looks like I do rocket science in my spare time (I don't; I'm just enough of a sucker to be working in my spare time).

It's true that screen is keeping my Emacs sessions alive and dvtm is tiling them all nicely for me, but it's an awful lot of work to reconnect, C-g j between the windows, and look for the right screen session to connect to. I'd like to be able to do all that in one tap; well, I'll settle for typing one line of text at the command line, ideally just by searching the shell's history.

Well, why wouldn't we permanently run dvtm inside GNU's-not-Unix Screen, such that all I need to do is reconnect to a screen session that runs dvtm, which runs four windows, each running a screen session connected to an older screen session in sharing mode?

Why wouldn't we indeed. Having tried that, I'll tell you why not: because it messes up the background color in Emacs. More specifically, when I run Emacs inside dvtm running inside screen, only the text gets the zenburn brown background; all the blank space around it is as black as Unix can make it. This is no good(tm) after spending all those late-night hours getting the colors to work to my liking. See, I would debug this if I had the slightest clue where to start. It's quite weird, because screen on its own runs Emacs without screwing up the background color; same goes for dvtm: I can run Emacs inside dvtm and it shows the background color just fine. It gets even weirder: I can run Emacs inside screen inside dvtm and it will be OK; it's just the combination of screen->dvtm->emacs that is no good. To iterate:

dvtm->screen->emacs OK

dvtm->emacs OK

screen->emacs OK

screen->dvtm->screen->emacs NOT OK.

(insert apology for making people dizzy)

So I figured I'd try running dvtm inside tmux: no go, exactly the same problem. Down this thread they seem to experience the same problem (just with dvtm and man); looks like an issue with the way the background color is refreshed or something.

Then I found the solution: dtach. Running dvtm inside dtach, and then running screen and Emacs inside the screen session, did not screw up the background color in Emacs.
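The resulting wrapper is a one-liner; the socket path below is arbitrary, -A attaches to the session or creates it if missing, and -r winch is the redraw method commonly suggested for full-screen programs like dvtm:

```shell
dtach -A /tmp/dvtm-work -r winch dvtm
```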

Just do one thing well.

So, who's writing that M-x podcatcher-mode?

Posted in remote, Technology