
Book: Better Embedded System Software

Better Embedded System Software
Author: Philip Koopman
ISBN-13: 978-0-9844490-0-2
http://koopman.us/

This book was listed as required for the graduate-level Embedded Systems class I took in the Spring of 2011, though it was not strictly necessary for the course. The book covers an array of helpful details in nicely broken-down chunks, which makes it very easy for a reader to consume. The material is presented in a way that is easy to understand. I often found myself having completed the section I had set as the goal for my reading allotment and figuring, “why not continue on?”. I felt as though I learned or reinforced something with every section.

Topics covered in the book range from design, to documentation, to coding practices, to validation. Each chapter is broken down into sections: Summary, Overview, Importance, Symptoms, Risks, Discussion, Summary Boxes, Pitfalls, and More Information. These sections give the reader a consistent structure, so they know what to expect as they read and can easily locate useful bits when they return to the material later.

I would recommend this book to all software engineers as a great resource that should be read at least once.

Compiling a Kernel Module for BeagleBone

Preface

This post was written as I attempted to create a kernel module for the BeagleBone. It includes my troubleshooting efforts and, at the end of the post, the method I finally found that worked.

Troubleshooting Phase

Here’s what I started with:
BeagleBone A6 (beaglebone:~$ uname -a
Linux beaglebone 3.2.28 #1 Sun Oct 21 15:51:05 CEST 2012 armv7l GNU/Linux)
Ubuntu 11.04 via VMware Player with the following installed:

  • git
  • gawk
  • subversion
  • texinfo (provides makeinfo)
  • texi2html
  • chrpath

To get the Angstrom kernel source, I followed the instructions from Angstrom, plus some hints from the BeagleBone-specific page.

git clone git://github.com/Angstrom-distribution/setup-scripts.git
cd setup-scripts
MACHINE=beaglebone ./oebb.sh config beaglebone
MACHINE=beaglebone ./oebb.sh update
MACHINE=beaglebone ./oebb.sh bitbake virtual/kernel

On that last step, building the kernel, I ran into an issue:

NOTE: Error expanding variable buildhistory_get_imageinfo       | ETA:  00:04:56
NOTE: Error during finalise of /home/user/setup-scripts/sources/meta-openembedded/meta-initramfs/recipes-bsp/images/initramfs-kexecboot-klibc-image.bb
ERROR: Failure expanding variable buildhistory_get_imageinfo, expression was 	if [ "${@base_contains('BUILDHISTORY_FEATURES', 'image', '1', '0', d)}" = "0" ] ; then
		return
	fi

...
which triggered exception OSError: [Errno 12] Cannot allocate memory
ERROR: Command execution failed: Exited with 1
NOTE: Error expanding variable buildhistory_get_imageinfo
NOTE: Error during finalise of /home/user/setup-scripts/sources/meta-openembedded/meta-initramfs/recipes-bsp/images/initramfs-kexecboot-image.bb

I had started out giving the virtual machine only 512 MB of RAM, since I only have 2 GB on the host machine. I tried upping it to 1 GB, but no dice. As a last RAM modification attempt, I specified 2 GB for the virtual machine, and yet the failure remained. In my searches, I came across someone who had been working on a similar setup and found they had used some additional tools I had not installed:

  • build-essential (turned out to already be installed, not of my doing)
  • python-psyco (did not resolve the problem)

One of the messages that came up during the config stage said it preferred bash to dash, but that it was going to access it via /bin/sh, so if I really wanted it to be happy, I should remap the symbolic link, which I did. Another message that comes up during bitbaking suggests sourcing /home/user/.oe/environment-angstromv2012.05 and then running bitbake on something (I chose nano) directly, without going through ./oebb.sh. So I gave that a try, and it failed almost identically, as shown in the log below.
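
For reference, the remap and the environment setup looked roughly like this (this is one way to do the remap; the environment file name comes from the message quoted above):

# Point /bin/sh at bash rather than dash, as the config step requested
sudo ln -sf bash /bin/sh
# Source the OpenEmbedded environment so bitbake can be called directly
. /home/user/.oe/environment-angstromv2012.05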

bitbake nano
Pseudo is not present but is required, building this first before the main build
NOTE: angstrom DOES NOT support libiconv because the eglibc provided iconv library is used
NOTE: angstrom DOES NOT support libiconv because the eglibc provided iconv library is used
NOTE: Error expanding variable do_populate_sdk#########         | ETA:  00:01:58
NOTE: Error during finalise of /home/user/setup-scripts/sources/openembedded-core/meta/recipes-core/meta/external-python-tarball.bb
ERROR: Failure expanding variable METADATA_REVISION, expression was ${@base_detect_revision(d)} which triggered exception OSError: [Errno 12] Cannot allocate memory
ERROR: Command execution failed: Exited with 1
NOTE: Error expanding variable toolchain_create_sdk_version
NOTE: Error during finalise of /home/user/setup-scripts/sources/openembedded-core/meta/recipes-core/meta/meta-toolchain-gmae.bb

Summary: There were 2 ERROR messages shown, returning a non-zero exit code.

So I borrowed an Ubuntu 11.04 machine, as the memory error really seemed important. I started by cloning the setup-scripts repo, configuring for beaglebone, and running the update. Then I stopped, sourced /home/user/.oe/environment-angstromv2012.05, and ran bitbake nano, as previously suggested. This time things went much better! Note that I disregarded the request to change /bin/sh to point to bash, as the computer is a loaner. Quite some time later, nano finished building. I hadn’t imagined it would take so long, so I wasn’t watching the clock; I can only say it took several hours.
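
Condensed into commands, the sequence on the borrowed machine was roughly:

git clone git://github.com/Angstrom-distribution/setup-scripts.git
cd setup-scripts
MACHINE=beaglebone ./oebb.sh config beaglebone
MACHINE=beaglebone ./oebb.sh update
. /home/user/.oe/environment-angstromv2012.05
bitbake nano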

Once nano built, I felt confident that I would finally be able to bitbake virtual/kernel, so I started it and let it run, which it did within a few hours. I went ahead and bitbaked systemd-image as well, before finally running bitbake virtual/kernel -c compile -f, as suggested in the similar-setup post I mentioned earlier. This finally created files where the aforementioned post said they would be, which look to be what is needed (for me the location was: /home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git).
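
In command form, roughly:

bitbake virtual/kernel
bitbake systemd-image
bitbake virtual/kernel -c compile -f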

Now we should have what is needed to cross-compile a kernel module for the BeagleBone on Ubuntu. However, things aren’t just going to start being simple, are they? I set up my extremely basic kernel module:

#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL"); /* avoids the "module license unspecified" taint warning */

/* Called when the module is inserted */
static int __init enable_usermode(void)
{
        printk(KERN_INFO "Usermode enabled.\n");
        return 0;
}

/* Called when the module is removed */
static void __exit disable_usermode(void)
{
        printk(KERN_INFO "Usermode disabled.\n");
}

module_init(enable_usermode);
module_exit(disable_usermode);

And my Makefile:

obj-m += enable_usermode.o

CROSS = /home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/sysroots/x86_64-linux/usr/bin/armv7a-angstrom-linux-gnueabi/arm-angstrom-linux-gnueabi-
KDIR = /home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git

PWD := $(shell pwd)

all:
        make -C $(KDIR) M=$(PWD) CROSS_COMPILE=$(CROSS) modules
clean:
        make -C $(KDIR) M=$(PWD) CROSS_COMPILE=$(CROSS) clean

And ran make, which resulted in this output:

make -C /home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git M=/home/user/LKM CROSS_COMPILE=/home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/sysroots/x86_64-linux/usr/bin/armv7a-angstrom-linux-gnueabi/arm-angstrom-linux-gnueabi- modules
make[1]: Entering directory `/home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git'
  CC [M]  /home/user/LKM/enable_usermode.o
cc1: error: unrecognized command line option "-m64"
cc1: error: unrecognized command line option "-mno-red-zone"
cc1: error: unrecognized command line option "-mcmodel=kernel"
cc1: error: unrecognized command line option "-maccumulate-outgoing-args"
make[2]: *** [/home/user/LKM/enable_usermode.o] Error 1
make[1]: *** [_module_/home/user/LKM] Error 2
make[1]: Leaving directory `/home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git'
make: *** [all] Error 2

This is disappointing. I figured there might be some aspect of the bitbake setup that I was interfering with from my side, so it seemed time to pursue moving to the BeagleBone. (Note: there was a resolution to the above, found below, related to using make ARCH=arm.)

Compiling kernel modules directly on BeagleBone

I started on this in parallel with the effort on Ubuntu, by trying to get the kernel headers using opkg install kernel-headers; however, this gave errors:

$ opkg install kernel-headers
Installing kernel-headers (3.2.28-r16a+gitr720e07b4c1f687b61b147b31c698cb6816d72f01) to root...
Downloading http://feeds.angstrom-distribution.org/feeds/v2012.05/ipk/eglibc/armv7a/machine/beaglebone/kernel-headers_3.2.28-r16a+gitr720e07b4c1f687b61b147b31c698cb6816d72f01_beaglebone.ipk.
wget: server returned error: HTTP/1.1 404 Not Found
Collected errors:
* opkg_download: Failed to download http://feeds.angstrom-distribution.org/feeds/v2012.05/ipk/eglibc/armv7a/machine/beaglebone/kernel-headers_3.2.28-r16a+gitr720e07b4c1f687b61b147b31c698cb6816d72f01_beaglebone.ipk, wget returned 1.
* opkg_install_pkg: Failed to download kernel-headers. Perhaps you need to run 'opkg update'?
* opkg_install_cmd: Cannot install package kernel-headers.

So I checked out http://feeds.angstrom-distribution.org/feeds/v2012.05/ipk/eglibc/armv7a/machine/beaglebone/, and sure enough the kernel-headers package had been updated recently, so I ran opkg update as suggested. Unfortunately, using the same kernel module from earlier and the same Makefile with KDIR changed to KDIR := /lib/modules/$(shell uname -r)/build (and no CROSS_COMPILE), running make produced a complaint:
make: *** /lib/modules/3.2.28/build: No such file or directory.  Stop.
Looking into that directory, of course, there is no build folder. The only thing I had installed was kernel-headers, which apparently is insufficient for building kernel modules directly on the BeagleBone. More disappointingly, I noticed that the only kernel-headers version now available is 3.2.30, while my BeagleBone has Angstrom 3.2.28.
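
For reference, the on-device Makefile at this point was just the earlier one with the cross-compile prefix dropped and KDIR swapped, roughly:

obj-m += enable_usermode.o
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

all:
        make -C $(KDIR) M=$(PWD) modules
clean:
        make -C $(KDIR) M=$(PWD) clean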

Next, I decided to get a better idea of why I was getting conflicting information about what to use for KDIR in the Makefile. It seems the standard is to have a symlink /lib/modules/$(uname -r)/build which points to a folder in /usr/src named for your Linux kernel. What is key is that there should be a Makefile there, which determines how kernel modules are built. So I began a painfully slow scp of the build output from Ubuntu, first compressing setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01/git to hopefully make the ordeal a bit faster.
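
The compress-and-copy step was along these lines (the archive name, BeagleBone address, and user below are placeholders):

# Compress the kernel build tree before copying it over
cd /home/user/setup-scripts/build/tmp-angstrom_v2012_05-eglibc/work/beaglebone-angstrom-linux-gnueabi/linux-ti33x-psp-3.2.28-r16b+gitr720e07b4c1f687b61b147b31c698cb6816d72f01
tar czf angstrom-build.tar.gz git
# Copy the archive to the BeagleBone (address and user are examples)
scp angstrom-build.tar.gz beaglebone@192.168.7.2:/home/beaglebone/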

With the copy ongoing, I decided to give cross compiling another shot. I was looking through my Embedded Linux Primer book and realized that I had forgotten about passing ARCH=arm when calling make. So I went back to my cross-compile environment, navigated to the folder with the Makefile and the basic kernel module code, and ran make ARCH=arm. It built! In a flurry of excitement, I copied it to my BeagleBone and tried to insmod it, only to be greeted with: insmod: error inserting 'enable_usermode.ko': -1 Invalid module format

Finally the tar.gz finished copying, so I extracted it to ~/angstrom-build. As root, I symlinked this into a couple of places:

ln -s /home/beaglebone/angstrom-build /usr/src/linux-3.2.28
ln -s /usr/src/linux-3.2.28 /lib/modules/3.2.28/build

Then I gave a quick try at building my kernel module, knowing it was likely I would need to run make modules and make scripts, but wanting to see what might happen. The attempt yielded this error:
/bin/sh: scripts/basic/fixdep: cannot execute binary file
So I went back to ~/angstrom-build and ran

make modules
make scripts

This took a couple of hours, and make scripts didn’t seem to do anything. Once this was done, I was able to build the kernel module; however, trying to insert it gave the same issue as when I had tried to insert the cross-compiled kernel module. So I decided to check what dmesg had to say:
enable_usermode: disagrees about version of symbol module_layout
Uh oh. It seems that even though the numerical version of the kernel matches, something else within the pre-built kernel distributed on the BeagleBone doesn’t match what I’ve built.

So I had two options: the long way, trying to set up the kernel I built on an SD card, or the shorter way, upgrading the kernel on my BeagleBone to see if maybe the kernel-dev package I installed previously could be used once the kernel was upgraded. I chose to try upgrading the kernel using opkg upgrade. With this selection, I went ahead and ran make scripts in /usr/src/kernel before changing the Makefile to use KDIR = /usr/src/kernel, and voila! This time the kernel module built and was successfully inserted!

Suggested course of action to compile a kernel module on BeagleBone

Note that I include directives to switch user (su), as you will need to be root in some cases. You may replace /home/user with ~; I typed it out because I noticed the font sometimes made it look like a hyphen. The command exit will take you back to the previous user, or, if you are the user who originally signed in, it will disconnect you (so don’t use exit if you only have one user, root). Finally, dmesg will show the kernel prints, so you can look for the text from the example kernel module when you insert and remove the module.

opkg update
opkg --tmp-dir /home/user upgrade
*reboot if kernel upgraded using: shutdown -r now*
opkg install kernel-headers
opkg install kernel-dev
cd /usr/src/kernel
su
make scripts
exit
cd /home/user
mkdir mykernelmod
cd mykernelmod
*create the below described c file*
*create the below described Makefile*
make ARCH=arm
su
insmod mykernelmod.ko
dmesg
rmmod mykernelmod.ko
dmesg
exit

mykernelmod.c

#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL"); /* avoids the "module license unspecified" taint warning */

/* Called when the module is inserted */
static int __init enable_mod(void)
{
        printk(KERN_INFO "My Kernel Module is enabled.\n");
        return 0;
}

/* Called when the module is removed */
static void __exit disable_mod(void)
{
        printk(KERN_INFO "My Kernel Module is disabled.\n");
}

module_init(enable_mod);
module_exit(disable_mod);

Makefile

obj-m += mykernelmod.o
KDIR = /usr/src/kernel
PWD := $(shell pwd)

all:
        make -C $(KDIR) M=$(PWD) modules
clean:
        make -C $(KDIR) M=$(PWD) clean


Perl: useful tidbits from work

For a few years now, a large part of my job has depended on writing and maintaining Perl scripts. I thought it would be wise, for my own sake, to note down some of the things I have learned along the way. I don’t claim these notes contain optimal solutions, and I look forward to any comments they may spark.

Adding an “include” directory to a file

I wasn’t the one to come up with this exact format, and I feel like some of it is extra; however, it has worked and was proliferated throughout many files before I picked them up, so once I need to use it for myself, I’ll take the opportunity to figure out what is really needed. As is, it does show the flexibility of what can be put in a BEGIN block.

BEGIN
{
    use File::Spec::Functions 'catfile';
    use File::Basename 'dirname';
    # Add a directory, located relative to this script, to the module search path
    push @INC, catfile(dirname($0), '../other_location/Perl_modules');
}

Generic Code, Project Specifics in a Hash

We had multiple projects which were very similar in what needed to be done via the Perl scripts. To keep the code from being hardcoded to a project, we used a Perl module Custom::Project, containing a hash with details about the products and their dependencies. For instance, one layout might be:

package Custom::Project;
...
  %Products = (
    SourceA_Product => {
      path        => "source_a",
      label_name  => "MYSOURCEA",
    },
    SourceB_Product => {
      path        => "source_b",
      label_name  => "MYSOURCEB",
    },
    Top_Product => {
      path        => "top",
      label_name  => "TOP",
      subproducts => ["SourceA_Product", "SourceB_Product"],
    },
  );

With access to this hash, if I want to apply a label, I can grab the information, as long as I know which product I need it for, using something like $Custom::Project::Products{"SourceA_Product"}{label_name}. Or, if I’d like to cycle through the subproducts of Top_Product, I can use the array reference directly via @{$Custom::Project::Products{"Top_Product"}{subproducts}}.
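
A minimal sketch using the hash above:

# Grab the label for a single product
my $label = $Custom::Project::Products{"SourceA_Product"}{label_name};

# Cycle through the subproducts of Top_Product and look up each one's path
foreach my $sub (@{ $Custom::Project::Products{"Top_Product"}{subproducts} }) {
    my $path = $Custom::Project::Products{$sub}{path};
    print "$sub is located at $path\n";
}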

Precompile a Regex

Sometimes it can be useful to store a regex in a variable. In one instance I set up a regex in a variable because it had changed a couple of times already, and it was part of a series of substitutions, so I wanted to factor out what I could. What I did was create a variable:

my $to_be_replaced = qr/NUMBER_(\d)_REPLACEABLE/;

Then, later, in the substitution phase, I was able to do something like this:

$file_contents =~ s/$to_be_replaced/New_Name_$1/g;

The important part here is that the number in the text being replaced is captured and reused in the replacement text.

Name both key and value when iterating over a hash

Instead of using foreach and naming only the key, an alternative is:

while (my ($key, $value) = each %hash) {}
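
A minimal usage sketch (the hash contents here are just illustrative):

my %hash = (alpha => 1, beta => 2);
while (my ($key, $value) = each %hash) {
    print "$key => $value\n";
}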

Passing values to subroutines

Previously, I had always used shift to pull out the values passed to subroutines. There is an alternative, which looks something like this:

my ($value1, $value2, $value3) = @_;

Since Perl doesn’t require you to be strict about dictating what will be passed where, this makes a simple way to quickly write out the arguments being passed, though it does require attention when making changes. Still, I find it more readable than repeated shifts.
A second point is not really specific to argument passing, but is useful to the cause. You cannot pass an array to a subroutine directly, since, as the note above makes fairly blatant, the arguments themselves already arrive as an array. Passing an array can still be critical, though, and it isn’t out of the question; it’s simply a matter of how you pass it. Instead of pushing the whole array in as @my_array, you can pass a reference to it using \@my_array. Once the subroutine receives it (call it $my_array_ref inside the subroutine), it needs to be dereferenced as @{$my_array_ref}. Note that this applies to returning information from a subroutine as well; you can return multiple values, and they can be references.
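
A minimal sketch of both points, with hypothetical names:

sub report {
    # Unpack the scalar arguments and the array reference in one line
    my ($name, $count, $items_ref) = @_;
    # Dereference the array reference to get back at the original list
    foreach my $item (@{$items_ref}) {
        print "$name ($count): $item\n";
    }
    # Multiple values, including references, can be returned as well
    return ($count, $items_ref);
}

my @items = ("a", "b", "c");
my ($count, $items_ref) = report("demo", scalar(@items), \@items);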

Execution Calls

There are a few different ways to execute something from within a Perl script. Each returns something slightly different, so it’s important to choose the right one.

system($CMD); # Returns the command's exit status (0 indicates success)
`$CMD`;       # Returns the command's output (the characters are backticks; effectively the same as qx/$CMD/ or qx{$CMD})
exec($CMD);   # Replaces the current process with the command; does not return

I know there is also the open() option, but I’m not as familiar with using it to execute commands rather than to open files. I will direct you to Perl HowTo if you wish for more detailed descriptions.
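
As a quick usage sketch of the first two (the command here is only an example):

my $cmd = "ls /tmp";
my $status  = system($cmd);   # 0 means the command ran successfully
my $listing = `$cmd`;         # captures the command's standard output as a string
print "system returned $status\n";
print "backticks captured:\n$listing";
# exec($cmd) would replace this Perl process entirely, so nothing after it would run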

Saving a file with Unix newlines

This may not be an issue if you’re on a Unix or Linux machine, but if you’re on a Windows machine and the file you’re creating needs to have the Unix newline format, there’s one extra step you can take to make it happen.

my $result = open(FILE, ">" . $filename) ? 0 : 1;
binmode FILE; # write raw bytes so "\n" stays LF rather than becoming CRLF
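
As a fuller sketch, using a lexical filehandle (the filename and contents here are just examples):

my $filename = "output.txt";
my $contents = "line one\nline two\n";
open(my $fh, ">", $filename) or die "Cannot open $filename: $!";
binmode $fh;          # keep "\n" as LF even when running on Windows
print $fh $contents;
close $fh;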

Parallel tasks

Sometimes it’s possible to perform tasks in parallel, and Perl has a few options for doing so, but I’ve started to prefer threads (and threads::shared).

use threads;
use threads::shared;
...
my @threads;
foreach my $folder (@list)
{
  # Spawn one worker thread per folder; MyFunction and its arguments
  # come from the surrounding script
  my $thr = threads->new(\&MyFunction, $arg1, $arg2, $arg3);
  push(@threads, $thr);
}
foreach (@threads)
{
  # Wait for each thread to finish and check its return value
  my $result = $_->join;
  die "Failed with result $result\n" if ($result);
}
undef @threads;

I did at one point run across a problem due to the size of what was being built by several threads, but I was able to alleviate the problem by increasing the stack size, replacing use threads; with:

use threads ('stack_size' => 4096 * 10);

Here 4096 is the page size; I can’t recall if I ever determined the default stack size, but this size seemed to be sufficient.

Dialog Boxes

Although it’s nice to fully automate tasks, sometimes there are things you cannot escape requiring the user to do. In one such case, I decided that command-line prompting would not be sufficient, so I opted to present the user with a dialog box.

use Tk;
use Tk::DialogBox;
...
my $dialog_response;
do {
  my $mw = MainWindow->new;
  $mw->withdraw();
  $dialog_response = $mw->messageBox(-title=>"My Dialog Box Title",
		  -message=>"Please answer Yes or No to my question here",
		  -type=>"YesNo"
		  );
} while ($dialog_response ne "Yes");

Books: Ruby on Rails and Design Patterns

In pursuit of my Master’s degree, I had planned to take a course called “Object Oriented Languages and Systems” during the summer of 2012. However, due to some unfortunate circumstances, I was unable to continue the course. Since I have been more focused on embedded design, my last experiences with object-oriented languages were not very recent. So, in an effort to ensure I would be well prepared for the class, I took it upon myself to start reading all four textbooks suggested for the class early, to get a good head start. The four books were:

Object-Oriented Design Using Java [ISBN-13: 978-0072974164]
Head First Design Patterns [ISBN-13: 978-0596007126]
Programming Ruby 1.9: The Pragmatic Programmers’ Guide (Facets of Ruby) [ISBN-13: 978-1934356081]
Agile Web Development with Rails (Pragmatic Programmers) [ISBN-13: 978-1934356548]

I moved around between books a bit, since I didn’t know what the sequence of the material in the class would be. I ended up reading the whole way through Object-Oriented Design Using Java first. This book covers a lot of material in relatively few pages (by comparison to the other three books). There are a lot of points packed into each chapter, many of which, from the details I knew about the course I was going to take, were highlighted by the course. However, I believe the tight correlation was probably due to the professor’s preference for this particular book, not plain old coincidence. Regardless, the material was good and the presentation satisfactory. I didn’t feel out of my depth with the examples, even having not programmed in Java for some years. There were entire chapters devoted to examples, which I thought was helpful, though the last example chapter, if I recall correctly, was a bit tricky. I did find myself re-reading the book’s examples for some things, such as the trickier inheritance occurrences (when using general versus specific type assignment), where the class notes attempted a description and fell a bit short.

My next read was the Programming Ruby book, as I found out that the beginning portion of the class would focus on Ruby on Rails. Ruby is a fascinating language. I find it generally uncomplicated, but there are also things you can write that could take a while to understand. Some concepts new to me were closures and mixins. Though I have yet to find a cool use for closures, I have the feeling that in the right scenario they would be awesome. Mixins are also awesome in their own right: they allow classes to have something that resembles multiple inheritance, but without some of the pains multiple inheritance can cause. Another thing I like about Ruby is the “everything is an object” treatment. I know this can bite, but the options it opens up are pretty sweet. With regard to the book itself, I liked its organization pretty well. There were only a few spots I felt were lacking in explanation, and goodness, there is certainly a lot in the book already. Having read this book and a portion of the Agile Rails book, I felt I understood significantly more than the students I talked to who had never touched Ruby before.

I interjected some reading of the Agile Web Development with Rails book once I heard that the first project for the class would use Ruby on Rails. I enjoyed following my way through this book, setting up a bookstore for their very own book company. Once I got several chapters in, they started adding fancier things to the site that I’m sure were nice, but with the impending project, I felt I needed to jump into things covered in later chapters. I have not investigated whether there was some organization chosen, or even whether it would have been within reason to skip any part. Instead I skipped around and tried to read the later chapters without implementing along the way, as I had with the earlier chapters. Things seemed to work out okay by this method, though I figure at some point it would be good to continue the original path through the book. I do have to say, the amount of behind-the-scenes work that Rails does is both a blessing and a curse: you get so much for “free”, but so much of it is not obvious that you have to look or read to find it.

Lastly, Head First Design Patterns. This book is gigantic at over 600 pages, but don’t let that be daunting. The page layout is not like your typical book: the pages contain varying amounts of information, with images and text smattered about in a generally pleasing way. There are questions with and without answers scattered about, and sometimes a crossword at the end of the chapter. The book is extremely example-driven, which is great for someone like me. I also enjoyed the somewhat ADD style, which meant that if you felt your attention moving elsewhere, it was probably to some other spot on the page instead of somewhere else entirely. The organization was well thought through, as the authors referenced design patterns from previous chapters to make points or comparisons fairly often. Having spent some time trying to understand Clang previously, when I got to the chapter on the Iterator and Composite patterns, all I could think was, “so Clang isn’t just intended to cause confusion, it must have taken from the Composite pattern”. I’m not sure that simply knowing that would solve all the confusion I experienced with Clang while trying to use it to generate my own LLVM, but it certainly gave me some well-needed perspective. I’d also like to point out that this book discusses MVC and its makeup in some detail, with good descriptions and examples, in one of the later chapters; MVC is a key part of how Ruby on Rails expects you to structure your code, so it could have been helpful to do this reading first had I known that detail.

I hope that in the future I can spend some time becoming a bit more familiar with Ruby (and Rails). I’ve also noticed some design patterns in code descriptions that colleagues have written up, though I’m not sure how prevalent the concept is in non-object-oriented languages.