Linux executable without main() Function | Write a C Executable program without main() function

Very Short version :

1) Create the program without main() function.

rajkumar.r@17:56:58:~/workspace/raj/workouts/ex_without_main$ cat exe_wo_main.c 
#include <stdio.h>
#include <stdlib.h>

const char my_interp[] __attribute__((section(".interp"))) = "/lib/";

int fn_wo_main() {
	printf("This is a function without main\n");
	exit(0);	/* no C runtime to return to, so exit explicitly */
}

2) Compile it as a Shared object

gcc -shared -Wl,-e,fn_wo_main exe_wo_main.c -o

3) Run it!!

rajkumar.r@17:57:36:~/workspace/raj/workouts/ex_without_main$ ./ 
This is a function without main

Longer Version with Explanation :

Generally, every C program starts at the main() function. This is a standard defined for C programming. When we look at the internals, we can understand how the system actually works in the background. C is a compiled language, which means the source code is converted into an executable before execution starts. The general C program compilation process goes through the following steps, which are well documented on several other websites.

C Source code
Preprocessor – Intermediate files
Compiler – Object files
Linker – Linked Object file [ Executable file ]
Loader – Loads the executable and executes.

For our goal, we have to understand the linking process. We write our code starting from main(), but before main(), several things happen in the background: setting up the environment, fetching input, configuring the console, etc. These are beautifully abstracted by GCC and the host system, which hide this process from the user.

Here, we will unravel one small step of this long and complex process: the entry point of execution. When the source code is preprocessed and compiled, object code is generated. After this, the standard and user-built object files are linked to generate the executable. The linking is directed by the linker script, which describes how to create the binary executable. This is where the entry point of the executable is defined; by default it is the C runtime's _start routine, which eventually calls main().

Before main() runs, a few things are configured. One of them is the dynamic loader, which loads the required symbols at run time. Usually in Linux it lives under /lib/, which in turn links to the corresponding loader provided by the compiler collection. In my case, it looks like this:

rajkumar.r@17:59:50:~/workspace/raj/workouts/ex_without_main$ ls -l /lib/ 
lrwxrwxrwx 1 root root 25 Jan 28  2013 /lib/ -> i386-linux-gnu/
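You can inspect which dynamic loader any binary requests by dumping its .interp section with readelf (assuming binutils is installed; /bin/ls is just a convenient dynamically linked example):

```shell
# Print the .interp section of a dynamically linked binary; this holds
# the path of the dynamic loader the kernel should invoke.
readelf -p .interp /bin/ls
```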

GCC is a very powerful and sophisticated compiler, and it offers a lot of control to developers. One of its several wonderful options is -Wl,option. According to the gcc man page:

	   Pass option as an option to the linker.  If option contains commas, it is split
	   into multiple options at the commas.  You can use this syntax to pass an argument
	   to the option.  For example, -Wl,-Map, passes -Map to the linker.

From the ld manual,

The linker command language includes a command specifically for defining the first executable instruction in an output file (its entry point). Its argument is a symbol name:

ENTRY(symbol)

Like symbol assignments, the ENTRY command may be placed either as an independent command in the command file, or among the section definitions within the SECTIONS command--whatever makes the most sense for your layout.

ENTRY is only one of several ways of choosing the entry point. You may indicate it in any of the following ways (shown in descending order of priority: methods higher in the list override methods lower down).

	the `-e' entry command-line option;
	the ENTRY(symbol) command in a linker control script;
	the value of the symbol start, if present;
	the address of the first byte of the .text section, if present;
	The address 0. 

For example, you can use these rules to generate an entry point with an assignment statement: if no symbol start is defined within your input files, you can simply define it, assigning it an appropriate value---

start = 0x2020;

The example shows an absolute address, but you can use any expression. For example, if your input object files use some other symbol-name convention for the entry point, you can just assign the value of whatever symbol contains the start address to start:

start = other_symbol ;
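Before combining these features, it helps to see what entry point the toolchain produces by default. A quick sketch, assuming a native gcc and binutils are available (file names under /tmp are illustrative):

```shell
# Build a normal C program and inspect its ELF entry point, which the
# linker sets to the C runtime's _start symbol (not main).
cat > /tmp/entry_demo.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hi\n"); return 0; }
EOF
gcc /tmp/entry_demo.c -o /tmp/entry_demo
readelf -h /tmp/entry_demo | grep 'Entry point'
nm /tmp/entry_demo | grep ' _start$'
```

The address printed by readelf matches the address nm reports for _start, confirming that main() is not where execution begins.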

We are going to combine all of the features above to run a program without a main function. The steps are:

1) Create the program without main() function.

rajkumar.r@17:56:58:~/workspace/raj/workouts/ex_without_main$ cat exe_wo_main.c 
#include <stdio.h>
#include <stdlib.h>

const char my_interp[] __attribute__((section(".interp"))) = "/lib/";

int fn_wo_main() {
	printf("This is a function without main\n");
	exit(0);	/* no C runtime to return to, so exit explicitly */
}

Here, you can see we have a strange thing: the .interp declaration. This compensates for the missing main() setup by adding the dynamic loader section.

2) Compile it as a Shared object

gcc -shared -Wl,-e,fn_wo_main exe_wo_main.c -o

3) Run it!!

rajkumar.r@17:57:36:~/workspace/raj/workouts/ex_without_main$ ./ 
This is a function without main



Creating a Local Git repository in ubuntu

Agenda :
1) Create a remote repository in local file system
2) Create a workspace
3) Create an empty repository
4) Add Initial file
5) Add remote origin
6) Push the code to remote repo
7) Creation of new branch
8) Pushing the new branch
9) Viewing all branches and Changing branch
10) Merging development branch to Master branch

1) Creation of a local repository
We need to define a location where the git repository will reside.
For this, we will create a local git repo.
To start with the local git repository setup, we first have to create a local directory.
I chose the location below for this.

cd /;
sudo mkdir gitrepo;
sudo chmod 777 gitrepo;
cd gitrepo;
mkdir project_1;
cd project_1;
git init --bare;

2) Creation of workspace
To start working, you need a workspace. I chose the below location as workspace

cd ~/;
mkdir workspace;
cd workspace;
mkdir project_1;
cd project_1;

3) Creation of an Empty Repository
Initialize an empty repository here.

git init;

4) Adding Initial files.
Add your required initial files to the repo.

echo "This repository is to learn creation of local git repos" >> README
git add README;
git commit -m "Initial Commit"

5) Adding Remote origin
Now, we have to specify where we will store this repository.
Remember the local repository we created at first? If not, check step 1, Creation of a local repository.

Now, we are going to use this location as the remote location.

git remote add origin /gitrepo/project_1

6) Pushing the code to remote repo
Now, we can push the code to the remote repo we have added.

git push origin master

The trailing master indicates that we push to the branch named master in the remote repo.

To test the setup so far, i.e. to check whether our code is safe at the remote repo, let's try the below.

cd ../
rm -rf project_1;

git clone /gitrepo/project_1;
cd project_1;

Now, we should be able to see our previously added file.

7) Creation of new branch
Now, we will create a branch for development..

In the workspace repo, we will create a local branch at first. Then we will push this branch to the remote repo.

git checkout -b devel

git status should tell you that you are on the new branch devel.

8) Pushing the new branch to remote :
Once you have added your development code, push the new branch to the remote repo as follows.

git add *;
git commit -m "Adding development code"
git push origin devel

This would push the branch to the remote repo..

9) Viewing all branches and Changing branch
To list all branches in the current repo, run

git branch -a

A star (*) will be present to indicate the active branch.
Remote branches will be prefixed with remotes/origin/ in their path.

To switch to an existing branch, run

git checkout branch_name

(the -b flag is only needed when creating a new branch, as in step 7).

10) Merging development branch to Master branch

git checkout master
git pull
git merge devel
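The whole workflow above can be exercised end-to-end with a short script (all paths under /tmp are illustrative, and the user name/email are placeholders):

```shell
#!/bin/sh
# End-to-end sketch of the local-repo workflow described above.
set -e
rm -rf /tmp/gitrepo/project_1 /tmp/workspace/project_1
REPO=/tmp/gitrepo/project_1
WORK=/tmp/workspace/project_1

# Step 1: bare "remote" repository on the local filesystem
mkdir -p "$REPO"
git init --bare "$REPO"

# Steps 2-4: workspace with an initial commit
mkdir -p "$WORK"
cd "$WORK"
git init
git config user.email you@example.com   # placeholder identity for commits
git config user.name  "Your Name"
echo "This repository is to learn creation of local git repos" > README
git add README
git commit -m "Initial Commit"
git branch -M master                    # ensure the branch is named master

# Steps 5-6: add the remote and push
git remote add origin "$REPO"
git push origin master

# Steps 7-8: development branch, pushed to the remote
git checkout -b devel
echo "feature work" > feature.txt
git add feature.txt
git commit -m "Adding development code"
git push origin devel

# Step 10: merge devel back into master
git checkout master
git merge devel
```

The `git branch -M master` line guards against newer git versions whose default branch name is not master.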

Stripping and stopping stripping of binaries in RPM Build.

From FedoraProject page :

Normally, if there are binary executables, then debugging symbols are
stripped from the normal binary packages and placed into a name-debug
subpackage. If this shouldn't happen, you can disable the stripping
and creation of this subpackage by putting this at the top of your
spec file:
%global _enable_debug_package 0
%global debug_package %{nil}
%global __os_install_post /usr/lib/rpm/brp-compress %{nil}

If you want to stop stripping just one single binary, you can add this line in the spec file after the make step:

strip --strip-unneeded binary_name
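Putting both pieces together, a hypothetical spec-file excerpt might look like this (the binary name is made up for illustration; use either mechanism, depending on whether you want to affect the whole package or one file):

```spec
# Option A: disable the debuginfo subpackage for the whole package.
%global _enable_debug_package 0
%global debug_package %{nil}
%global __os_install_post /usr/lib/rpm/brp-compress %{nil}

%build
make %{?_smp_mflags}
# Option B: strip just this one binary by hand after make.
strip --strip-unneeded mybinary
```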

What is the Unified Modelling Language [UML] and what are its diagrams?

Very very short story

  1. Business process modeling with use cases
  2. Class and object modeling
  3. Behavior modeling
  4. Component modeling
  5. Distribution and deployment modeling
  • User model view
      • Use case diagram
  • Structural model view
      • Class diagrams
      • Object diagrams
  • Behavioural model view
      • Sequence diagrams
      • Collaboration diagrams
      • State diagrams
      • Activity diagrams
  • Implementation model view
      • Component diagrams
  • Environment model view
      • Deployment diagrams

Now, Long story..

Summary :

This post is from here.
If you're an analyst, developer or architect, the chances are that you have heard of the UML.
If you're not already familiar with OOA/D design methods such as the UML, there is a fair chance the pressure is on to utilize this standard as part of your analysis and design process.

To those not familiar with OOAD, layman or seasoned developer alike, the UML can seem overly comprehensive and daunting to learn.

This article attempts to provide a fast-track introduction for those who need to learn the UML basics and begin understanding UML, so that it can be incorporated into your development project.

  1. I shall begin by clarifying exactly what UML is and is not.
  2. Then question why we should use the UML at all.
  3. Then I will conclude part 1 of this article with a high-level tour of the UML modeling tool-set.
  4. In part 2 of this article I shall continue by applying the models and notation already discussed to a real life business problem with a working example.

What is the UML?
In 1997 the OMG (Object Management Group) adopted the UML as a common
architectural framework for modeling object-oriented systems and applications.
The UML is derived primarily from the strengths of three notations:

  • Booch OOD (Object-Oriented Design),
  • Rumbaugh OMT (Object Modeling Technique),
  • Jacobson OOSE (Object-Oriented Software Engineering).

The OMG describes UML as a language representing unified best engineering
practices for

  • specifying,
  • visualizing,
  • constructing,and
  • documenting

the elements of business modeling, software and even non-software systems.

• Specification:
The UML captures "what" is required of a system and "how" a system may be
implemented. It records the all-important design and implementation decisions
that need to be established during a system development lifecycle.

• Visualization:
The UML allows the visualization of systems before they are implemented.
Shapes with well-defined semantics communicate to a wider audience more
succinctly than a descriptive narrative, and more comprehensively than what
can often be represented by a programming language.

• Construction:
The UML is used to guide and craft the implementation of a complicated
system; it is possible to generate OO source code from UML models and
vice versa.

• Documenting:
The UML can capture knowledge and document deliverables such as requirements
documents, functional specifications and test plans, all of which are
critical for measuring a system throughout its life cycle.

These four are modeling applications of UML, not to be confused with a
process. There are many processes available which use the UML; furthermore,
there are many tools available on the market that support the UML and, in
some cases, also facilitate following a particular process.

Therefore the UML is not:
• A Process:
It is a modeling toolkit with its own notation and syntax. A process goes
further by describing the steps you take when developing software, which
diagrams are produced and in which order, who does what, and so on. The
premise behind the UML is that it is process-independent, but it enables
and facilitates such processes.
• A Visual Programming Language:
It is a visual modeling language from which programs can be derived. The
notation behind UML modeling comprises a set of specialized shapes used to
construct different kinds of diagrams, while the UML syntax specifies how
these shapes can be combined.

Therefore, further to learning the basics of UML, it is recommended that:
• a process or methodology is adopted
• a UML development tool is utilized

UML may be used to support a number of methodologies, such as the Rational
Unified Process. Some methodologies are more suited to larger enterprise
applications with a large team of architects and developers, while others
are more appropriate for a single person or small teams working on small
embedded systems.

Similarly, there are many UML development tools available, such as
Rational Rose (Rational Software Corporation),
Enterprise Architect (Sparx Systems),
Describe (Embarcadero Technologies) and even
Microsoft Visio.

Why Use UML?
With many of the rapid application development (RAD) tools available, such
as Delphi or Visual Basic, developing an application is fairly easy.

But does this method result in a professional-quality application?

Deborah Kurata (1998) states that if an application is to be of professional
quality, it must:
‚ÄĘ meet the needs of the users
‚ÄĘ be robust
‚ÄĘ be maintainable
‚ÄĘ be documented

Many developers using RAD tools believe it makes sense to develop an
application rapidly: write a prototype, and then keep adding more code until
the application is complete.

There is, however, a fundamental problem with this approach. The resulting
application will lack a well-defined architecture because it has not been
thought out properly. This will compromise fundamental object-oriented
principles and result in inefficient and difficult-to-maintain code.

With the use of UML, an appropriate UML development tool, and an applicable
process or methodology, the design and refinement of the application is
shifted from the development phase to an analysis and design phase.

This reduces risks and provides a vehicle for testing the architecture of a
system before coding begins.

The analysis and design overhead will eventually pay dividends: the system
will have been user-driven and documented, and the generated skeleton code
will be object-oriented and promote re-use.

Sinan Si Alhir (1998) describes the UML as enabling the capturing and
leveraging of operational knowledge to facilitate increasing value:
increasing quality, reducing costs, and reducing time-to-market, while
managing risks and being proactive in regard to ever-increasing change.

This is a fairly convincing statement in itself; Sinan states that the UML
will increase quality and reduce development time while being flexible
enough to respond to changing requirements.

Furthermore, the use of UML will help:
• the communication of the desired structure and behavior of a system
between stakeholders
• the visualization and control of a system's architecture
• a deeper understanding of the system, exposing opportunities for re-use
• the management of risk

So what are these models?
What models are available, what use are they, and how do they link together?

To answer this, we need to consider the primary modeling purposes of UML.

These are:

  1. Business process modeling with use cases
  2. Class and object modeling
  3. Behavior modeling
  4. Component modeling
  5. Distribution and deployment modeling

Each model is designed to let developers and stakeholders view a system from
different perspectives and with varying levels of abstraction.

Each diagram fits somewhere into these five architectural views, each
representing a distinct problem/solution space.

These can be described as the:

  1. user  model view,
  2. structural model view,
  3. behavioral model view,
  4. implementation model view and
  5. the environment model view.

The User Model View
The user model view encompasses the models which define a solution to a problem
as understood by the client or stakeholders.This view is often also referred to
as the use case or
scenario view.

• Use case Diagram:
These models depict
the functionality required by the system and
the interaction of users and other elements (known as actors)
with respect to the specific solution.

The Structural Model View:
The structural view encompasses the models which provide the structural
dimensions and properties of the modeled system. This view is often also
referred to as the static or logical view.

• Class Diagrams:
These models describe the static structure and contents of a system using
elements such as classes and packages, and display relationships such as
inheritance.
• Object Diagrams:
These depict a class or the static structure of a system at a particular
point in time.

The Behavioral Model View:
These models describe the behavioral, dynamic features and methods of the
modeled system. This view is often also referred to as the concurrent or
collaborative view.
• Sequence diagrams:
Describe the timing sequence of objects over a vertical time dimension,
with interactions between objects depicted on a horizontal dimension.
• Collaboration diagrams:
Describe the interactions and relationships between the objects of a system,
organized in time and space. Numbers are used to show the sequence of
messages.
• State diagrams:
Describe the sequence, status conditions and appropriate responses or
actions to conditions during the life of the objects within the system.
• Activity diagrams:
Describe the methods, activities and resulting transitions after completion
of the elements, as flows of processing within a system.

The Implementation Model View:
The implementation view combines the structural and behavioral dimensions
of the solutions realization or implementation.
This view is often also referred to as the component or development view.

• Component diagrams:
These depict the high-level organization and dependencies of source code
components, binary components and executable components, and whether these
components exist at compile, link or run time.

The Environment Model View:
These models describe both the structural and behavioral dimensions of the
domain or environment in which the solution is implemented. This view is
often also referred to as the deployment or physical view.

• Deployment diagrams:
These models depict and describe the environmental elements and the
configuration of runtime processing components, libraries and the objects
that will reside on them.

How do the models fit together?
After a high-level tour of the architectural views and diagrams available,
it is important to remember once again that UML is not a process; therefore
there is no right or wrong order in which these models should be constructed.

In practice, the only real prerequisite to a model is a business process
model, a use case or a use case diagram. From then on, a method of
refinement will often be used on each model, as many elements of the system
will not become obvious until it is modeled from a different perspective.
Therefore the activities of analysis (what are the objects?) and design
(the allocation of behavior) will be iterative and mutually complementary.

We have taken a look at the origins and definition of the UML to provide a
basic understanding of what it is and what the UML can offer us. We have
also examined how we can benefit from its use on our next development
project, and briefly explored the architectural views and models available
and how these can link together. In the concluding part of this article I
shall apply the principles and models discussed here to a real-life business
problem and development solution, using example UML models where appropriate.

References :
Alhir, Sinan Si. "The True Value of the Unified Modeling Language (UML)". Distributed Computing Magazine. DC Corp. July 1998.
Alhir, Sinan Si. UML in a Nutshell. O'Reilly and Associates, Inc. 1998.
Alhir, Sinan Si. "Understanding the Unified Modelling Language (UML)". Methods & Tools. Martinig & Associates. April 1998.
Kurata, Deborah. "Develop a Professional Application". Visual Basic Programmer's Journal. pp 83-86. March 1998.

Further Reading
UML Tutorial, Sparx Systems
What is UML, Embarcadero Technologies
Introduction to OMG's Unified Modeling Language, OMG
Understanding the Unified Modeling Language, Sinan Si Alhir

Building Busybox to get root file system to Linux kernel

Previously, we built the Linux kernel and Qemu, and booted the kernel using Qemu.

But in those attempts, we loaded only a simple hello world program with the kernel.

Now, we can try to create a further improved Linux system.

For that, we need to get a Root FileSystem.

Here is an explanation of what a Root File System is.

So, we need a set of utilities, according to our needs, to run on the Linux kernel.

Creating all these utilities takes a certain effort. To make it simple, we can use a project called

BusyBox – The Swiss Army Knife of Embedded Linux

BusyBox is a set of utilities compiled into a single executable, exposed as multiple executables by creating soft links with different names.

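BusyBox decides which applet to run by looking at the name it was invoked as (argv[0]). A toy sketch of the same trick, with made-up names under /tmp:

```shell
# One script, symlinked under two names; behavior depends on argv[0].
cat > /tmp/multitool <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    hello) echo "Hello!" ;;
    bye)   echo "Goodbye!" ;;
    *)     echo "unknown applet" ;;
esac
EOF
chmod +x /tmp/multitool
ln -sf /tmp/multitool /tmp/hello   # same binary, different names
ln -sf /tmp/multitool /tmp/bye
/tmp/hello   # prints Hello!
/tmp/bye     # prints Goodbye!
```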

Creating Busybox based Root FS.

Here is an overview of what we are going to do.

  1. Download the busybox source.
  2. Configure
  3. Build
  4. Create Root File System image
  5. Load using Qemu.

Here it is detailed.

1) Download the busybox source.

Busybox source can be downloaded from busybox FTP site, here.

mkdir src;
cd src;
wget -c
tar -xf busybox-1.21.0.tar.bz2
cd busybox-1.21.0;

2) Configure busybox

BusyBox has a menu-based configuration. Anyway, to make it simpler to configure, busybox provides a set of preset configurations, as below.

make target     Description
help            Show the complete list of make options
defconfig       Enable a default (generic) configuration
allnoconfig     Disable all applications (empty configuration)
allyesconfig    Enable all applications (complete configuration)
allbareconfig   Enable all applications, but no subfeatures
config          Text-based configurator
menuconfig      N-curses (menu-based) configurator
all             Build the BusyBox binary and documentation (./docs)
install         Build and make it ready to install at the INSTALL PREFIX directory (./_install)
busybox         Build the BusyBox binary
clean           Clean the source tree
distclean       Completely clean the source tree
sizes           Emit the text/data sizes of the enabled applications

Out of these, we are going to use, defconfig.

export ARCH=arm;
export CROSS_COMPILE=arm-none-linux-gnueabi-;
make defconfig;

Now, we need to customize busybox so that we can run it standalone along with the Linux kernel.

Build busybox as a static binary, as we won't have a shared libc in the system we are booting.

This can be done as follows.

make menuconfig
# Enable   Busybox Settings--->Build Options---> [*] Build BusyBox as a static binary (no shared libs)

3) Build busybox

Now, we can build the busybox, using make install command

make -j4 install

Now, after the build, we will have the busybox install directory at busybox-1.21.0/_install/

[Update] Note : If the build fails with an error like the below,

networking/lib.a(inetd.o): In function `register_rpc':
inetd.c:(.text.register_rpc+0x2c): undefined reference to `pmap_unset'
inetd.c:(.text.register_rpc+0x42): undefined reference to `pmap_set'
networking/lib.a(inetd.o): In function `prepare_socket_fd':
inetd.c:(.text.prepare_socket_fd+0x52): undefined reference to
collect2: error: ld returned 1 exit status
make: *** [busybox_unstripped] Error 1

disable RPC support via menuconfig as below (uncheck the option):

 Networking Utilities  --->
   [ ]   Support RPC services

4) Create Root Filesystem image.

As we have seen, initramfs supports the newc file format. We are going to create a cpio archive to hold the root filesystem.

$ mkdir ../init/
$ cd _install;
$ find . | cpio -o --format=newc > ../../init/rootfs.img
$ cd ../../init;
$ file rootfs.img
   ASCII cpio archive (SVR4 with no CRC)
$ cd ../

5) Booting using initramfs we have built.

Now, we have the kernel built previously and the rootfs that we have just built.

Let's boot it using qemu.

$ qemu-system-arm -M versatilepb -kernel zImage -initrd init/rootfs.img -append "root=/dev/ram rdinit=/bin/sh"

The system should boot, and the shell prompt will be shown.
You can try ls to see something like this:

/ # ls
bin    dev    linuxrc    root    sbin    usr
/ #

Voila!! Very basic utility is working!!

Building and Booting Linux using Qemu

Previously, we built and booted U-Boot through Qemu.
Now, let us build and boot Linux using Qemu.
Get the latest kernel source from
I took stable 3.9.3 as of writing.

mkdir original
mkdir src
cd original
wget -c
cd ../src
tar -xf ../original/linux-3.9.3.tar.xz

Let us define the environment variables that the kernel build uses.

export ARCH=arm
export CROSS_COMPILE=arm-none-eabi-

Now, let us configure the kernel build for Versatile Express.
This config is available at


For the list of available configs, you can explore further in the arch/ directory

make vexpress_defconfig;

Now, we need to make a few changes to make this kernel usable for our needs
later. We can remove module support (for simplicity) and enable EABI support
as a binary format (also allowing the old ABI).
This is necessary to run software compiled with the CodeSourcery toolchain.

Kernel Features ---> Use the ARM EABI to compile the kernel
Kernel Features ---> Allow old ABI binaries to run with this kernel

make menuconfig

We are all set to build the kernel. Now run

make -j4 all

Here, -j4 tells the build to use 4 parallel jobs (ideally the number of cores in your machine).
It will take some time to build.
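Rather than hard-coding 4, the job count can be taken from the machine itself (a minor convenience, using the coreutils nproc command):

```shell
# Use one build job per available CPU core.
echo "building with $(nproc) jobs"
# make -j"$(nproc)" all    # run this inside the kernel source tree
```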

Meanwhile, let us see what is the root file system and why do we need one, and
how to boot the kernel with a simple program.

Here is a definition of Root File System from Linux Information project

The root filesystem is the filesystem that is contained on the same 
partition on which the root directory is located, and it is the 
filesystem on which all the other filesystems are mounted (i.e.,
logically attached to the system) as the system is booted up 
(i.e., started up). 

The exact contents of the root filesystem will vary according to the 
computer, but they will include the files that are necessary for 
booting the system and for bringing it up to such a state that
the other filesystems can be mounted as well as tools for fixing a 
broken system and for recovering lost files from backups. The 
contents will include the root directory together with a minimal set 
of subdirectories and files including /boot, /dev, /etc, /bin, /sbin 
and sometimes /tmp (for temporary files).

Hopefully the kernel has been built and is ready by now.
The build should have completed with a message like,

  OBJCOPY arch/arm/boot/zImage
  Kernel: arch/arm/boot/zImage is ready

The kernel will be available at


Now, we can try to boot the kernel using qemu as below.

qemu-system-arm -M vexpress-a9 -kernel arch/arm/boot/zImage -append "console=tty1"

-M vexpress-a9 : emulate the Versatile Express board
-kernel arch/arm/boot/zImage : use this file as the kernel
-append "console=tty1" : tty1 acts as the console;
generally, Linux uses the tty interface to display console messages

Here, you can read about what a tty is

But now the kernel will end up in a panic, telling something like,

VFS: Cannot open root device "(null)" or unknown block(0,0): error -6
Please append a correct "root=" boot option;
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown 

Now, what we discussed about file systems comes in useful.
As the kernel message tells us, we are missing a root file system. Building
a complete root file system is a (relatively) complex task, so we are going
to generate a simple one.

Creating a Simple Filesystem :

Create a file test.c in the src directory with the below content.

#include <stdio.h>

void main() {
	printf("Hello World!\n");
	while (1)
		;	/* loop forever: the kernel expects init never to exit */
}
As the program shows, it runs continuously, as the kernel expects the first
program to run forever. Compile this program using a cross compiler for ARM
running Linux.
[This is not the same as the bare metal toolchain. The bare metal toolchain
is for ARM with no OS, i.e. arm-none-eabi-, which we exported while building
the kernel.]

arm-none-linux-gnueabi-gcc -static test.c -o test

This compiles and creates an ELF, statically linked with all required code
in a single binary. We need a filesystem, but we only have a binary file,
so we need to generate a filesystem using some tool. Before that, we should
know: what is initramfs?

initramfs, as the name tells, is the Initial RAM File System. It was
introduced with the Linux 2.6 kernel, before which initrd was used.

From the Ubuntu Wiki:

The key parts of initramfs are:

1) CPIO archive, so no filesystems at all are needed in kernel. 
   The archive is simply unpacked into a ram disk.
2) This unpacking happens before do_basic_setup is called. This means 
   that firmware files are available before in-kernel drivers load.
3) The userspace init is called instead of prepare_namespace. All 
   finding of the root device, and md setup happens in userspace.
4) An initramfs can be built into the kernel directly by adding it to
   the ELF archive under the section name .init.ramfs. initramfs' can be
   stacked. Providing an initramfs to the kernel using the traditional
   initrd mechanisms causes it to be unpacked along side the initramfs'
   that are built into the kernel.
5) All magic naming of the root device goes away. Integrating udev into 
   the initramfs means that the exact same view of the /dev tree can be 
   used throughout the boot sequence. This should solve the majority of 
   the SATA failures that are seen where an install can succeed, but the
   initrd cannot boot.

This initramfs uses a format called newc. Now, to get the cpio archive
(the initramfs) from the binary, run the below command.

echo test | cpio -o --format=newc > rootfs

Now, we have the zImage kernel and rootfs – the initramfs. Let us load the kernel

qemu-system-arm -M vexpress-a9 -kernel linux-3.9.3/arch/arm/boot/zImage\
-initrd rootfs -append "root=/dev/ram rdinit=/test"


-initrd rootfs : Qemu option which tells, rootfs is the Filesystem 
binary image.

root=/dev/ram and 
rdinit=/test are kernel options passed to the kernel we load.

rdinit=/test tells the kernel to run "test" executable we built as init.

Now, we can see "Hello World!" being printed.

Voila!! Done!!

TFTP Setup in Ubuntu

Goal : To set up a TFTP server in Ubuntu and test it.

Installing TFTP Server in Ubuntu

$ sudo apt-get install tftp tftpd-hpa

Once the setup is done, you can view/edit the configuration at
/etc/default/tftpd-hpa

It should look something like,

$ cat /etc/default/tftpd-hpa

Starting and stopping the service: We can start and stop the tftp service with the following commands.

service tftpd-hpa status
service tftpd-hpa stop
service tftpd-hpa start
service tftpd-hpa restart
service tftpd-hpa force-reload

Testing TFTP: For testing, we are going to try to download a file from the
tftp server. For this, we need to copy some file to the location given by
the TFTP_DIRECTORY variable in /etc/default/tftpd-hpa. I copied a uImage
file to that location.

$ cp uImage /var/lib/tftpboot/

Now, start the tftp client as below,

raj@raj-VirtualBox:~$ tftp localhost

To check the status, run status command

tftp> status
Connected to localhost.
Mode: netascii Verbose: off Tracing: off
Rexmt-interval: 5 seconds, Max-timeout: 25 seconds

Now, try to get the file you have copied to the TFTP_DIRECTORY

tftp> get uImage
Received 2169792 bytes in 0.3 seconds

Voila!! You are done.. TFTP is configured and working.