Tuesday, April 27, 2010

CVS: Update a Tagged set of files to Head

If you tag a minimal set of files carved out of a larger module, you may want to keep that same set of code but pull later versions of each file. This is useful for deliveries: we create the minimal set once to determine ONLY the files appropriate for sending to the customer, then reuse that same list for each subsequent delivery rather than going through the effort again.

First check out the old tag:

cvs co -r <tag_name> <module_name>

Then tell CVS to go inside the folder/module you checked out and update the files to HEAD...but ONLY those files, not the entire repository.

find -type f ! -path '*CVS*' | sort | xargs -n1 cvs update -A
Now if you want to retag just these files, use the instructions for "Tagging Individual Files" in an earlier post.
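The find filter above can be sanity-checked without touching a repository at all. A minimal sketch, using a throwaway directory tree (the file names are made up):

```shell
# Build a scratch tree that mimics a checkout with CVS metadata directories
mkdir -p scratch/src/CVS scratch/docs/CVS
touch scratch/src/main.c scratch/src/CVS/Entries
touch scratch/docs/readme.txt scratch/docs/CVS/Entries

cd scratch
# List only the real files, skipping anything under a CVS directory;
# these are the paths that would be fed to "cvs update -A"
find . -type f ! -path '*CVS*' | sort
# → ./docs/readme.txt
#   ./src/main.c
```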

Sunday, April 25, 2010

CVS: Tagging Individual Files

To tag individual files (for instance, to create a minimal set), delete all the extra files from the working directory but leave the CVS subdirectories in place. Then run the following command, replacing GENERIC_TAG_NAME with the tag you want:
$ find -type f ! -path '*CVS*' | xargs -n1 cvs tag GENERIC_TAG_NAME
This will find all files, ignoring everything in the CVS folders, and then run the cvs tag command on them. If it fails to tag one file, it will continue on to the next.

CVS: Making a File Binary After It's Been Checked In

Here is the command to mark a file as binary when it is not already stored as binary in the repository:
$ cvs admin -kb <filename>
You can go to the cvswrappers file in the CVSROOT directory on the server and specify all the files that you want to be binary by default.
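For reference, cvswrappers entries are one glob pattern per line with the keyword-expansion option; a sketch (the patterns here are just examples):

```
*.jpg -k 'b'
*.zip -k 'b'
*.doc -k 'b'
```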

CVS: Fix Wrong Commit Message

Occasionally, you or a team member will commit code with the wrong commit message or with a blank message. Fixing this is dangerously easy.
$ cvs admin -m1.3:"my message will replace" FILENAME

Where 1.3 is the revision number that had the incorrect commit message. This replaces the message previously recorded for that revision.

To enter a multi-line commit message, press Enter before closing the quote and keep typing the message. When done, close the quote, add the FILENAME, and press Enter. Something like this:

$ cvs admin -m1.3:"my message
>will
>replace" FILENAME

The > at the start of each new line is the shell's continuation prompt, signifying that you are still typing one command; you do not type it yourself.

Saturday, April 24, 2010

CVS: Fixing a misnamed tag

If code was imported with the wrong tag, do the following before anyone starts working from the bad tag:

Create a new tag pointing to the old tag:
$ cvs rtag -r <Existing_WRONG_Tag> <New_Tag> <module>

Delete the old tag:
$ cvs rtag -d <Existing_WRONG_Tag> <module>

This is how you would rename any tag, not just one made on import of new code. The cautionary note is that messing around with existing tags can be very dangerous: if the tag is already used in a release, you have broken your traceability; if developers have already checked out the tag, everyone's configuration is now inconsistent. Use this with caution. Luckily, tags do no permanent damage to the code repository, but for traceability's sake, try not to change them at will.

LINUX: Process information--priority and niceness

When doing Linux system administration, it can often become necessary to see the "niceness" or priority of a certain process. A process's niceness has to do with how much it hogs system resources. We're not going to discuss the details of what priority and nice mean here; hopefully if you are reading this you already know. The point is that you can use the command below, especially the forest hierarchy, to debug problems. Plain "ps" doesn't give you this information, so you have to pass it a few options. When looking at the hierarchy, you can determine whether a parent process has low priority or a high nice value. For example, if a GCC build is running with normal values but taking way too long, the forest hierarchy may reveal that the bash shell it was spawned under is running with low priority and a high nice value, which drags down the overall performance.

To get process information showing priority and niceness:
ps -o pid,ni,pri,comm f -U <username>

-U specifies the user. The bare "f" option draws an ASCII-art forest showing the process hierarchy.
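To convince yourself that a spawned process inherits (or changes) its nice value, you can read it straight out of /proc: on Linux, field 19 of /proc/self/stat is the process's nice value. A small sketch, assuming a Linux /proc and a starting niceness of 0:

```shell
# Run awk under "nice -n 7" and have it report its own nice value,
# which is field 19 of /proc/self/stat on Linux
nice -n 7 awk '{print $19}' /proc/self/stat
# → 7
```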

LINUX: RPM packages

Installing an RPM is rather simple (if the dependencies aren't a problem). Simply type:
rpm -i <package>.rpm

However, you may want to see which scripts are going to run before installing it, or find out afterwards where things were installed. You can get that information by typing:
rpm -q -i -l --scripts -p <packagename>.rpm
rpm -q -i -l --scripts <installed name>

The only difference between the two commands above is that the first runs against the .rpm file and can be used prior to installation, while the second runs against an already-installed package and does not need the original .rpm.

LINUX: Setting the Date/Time

These instructions were written for a Linux Fedora Core 3 system; I think it is handled differently on later versions. Ideally you should just point at an NTP server to make your life easier, but in the event that your company firewall won't allow that, these instructions will help for a standalone system.

There are more complicated explanations for making sure the timezone is set properly and such, but to simply change the time/date, use the following to set the UTC time. Note that UTC most likely differs from your current time zone, so calculate accordingly; it is easy enough to google "time UTC" and find out what it is at this moment.
$ date -u MMDDhhmmCCYY.ss

Here MM is the month, DD the day, hh the hour in 24-hour time, mm the minutes, CCYY the 4-digit year, and ss the seconds, all in UTC. Once you set that, typing date will show the current time/date based on your zone. To set the zone, link /etc/localtime to the right timezone file in /usr/share/zoneinfo by typing:
$ ln -s /usr/share/zoneinfo/EST /etc/localtime

If there is already an /etc/localtime file, you will need to move it out of the way first.
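If you want to double-check the month-day-hour-minute-year input layout before actually setting the clock, GNU date can render a known timestamp in that same format (the epoch value below is arbitrary):

```shell
# Format a fixed UTC instant the same way "date -u" expects it as input;
# the -d @<epoch> syntax requires GNU date
date -u -d @1272400000 +%m%d%H%M%Y.%S
# → 042720262010.40
```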

Friday, April 23, 2010

LINUX: Segfault and Core Dump

If you are encountering a segfault while running applications and want a core dump to find out what is wrong, you need to set the core file size to unlimited. Do that by typing the following at a Linux shell prompt:
$ ulimit -c unlimited
This setting lasts only for the current shell session; closing the terminal discards it.
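The per-session scoping is easy to demonstrate. Lowering a limit is always permitted, so this sketch lowers the limit in a child shell and shows that the parent is untouched:

```shell
# A child shell drops its own core-file limit to 0...
sh -c 'ulimit -c 0; ulimit -c'
# → 0

# ...but the parent shell's limit is unchanged
ulimit -c
```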

LINUX: If-Else Shell Command (OR operator)

A useful operator on the Linux command line is ||. It reads roughly as "if the previous command failed, then do this." As an example, suppose a script should launch a tool, say cervisia, but must not start a second instance if one is already running. You could do the following:
ps ux | grep -q cervisi[a] || cervisia . &

Now, there is an interesting trick being done with grep. To prevent grep from finding its own command line in the process list, make the last character of the target name a bracket expression: the "[a]" in cervisi[a] still matches "cervisia", but the grep command line itself no longer contains the literal string "cervisia", so grep does not match itself. Note that the ps-and-grep pair can also be replaced by the more concise pgrep utility, which does both jobs in one command.
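The short-circuit behavior itself can be seen with nothing more than the true and false builtins:

```shell
# The right side runs only when the left side fails
false || echo "left side failed, so this runs"
# → left side failed, so this runs

# When the left side succeeds, the right side is skipped
true || echo "this never prints"
```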

PIBS: Initial setup of PIBS boot loader

We use an IBM (now AMCC) 440GX Ocotea eval board. Our environment has multiple 440GX boards connected to a host machine via a switch that acts as a private LAN; the eval boards never touch the corporate network. The host machine needs two Ethernet cards (we used a USB adapter to get the second NIC). The first connects to the network (on the domain), while the second connects the host machine to the private LAN switch. Users can then Remote Desktop into the host machine to work on the boards without ever having to be physically in the lab. With a remote reset capability enabled, they can do everything remotely, and wiring the boards together like this allows for sharing of resources, meaning cost savings, since you will not need a one-to-one ratio of boards to developers.

There is some setup that needs to be done first to get the boards working. We assume you have the connections made as described above, and that a kernel is sitting on the host machine (with TFTP enabled) so that the boards can boot from it.

Set up the configuration for the first Ethernet port, eth0. In this example the board is assigned the static IP address 192.168.0.13; change this to whichever IP fits your network.
PIBS $ set ifconfigcmd0=ent0 192.168.0.13 netmask 255.255.255.0 up

Assign the location of the kernel on the host machine so that it can be grabbed through the TFTP server.
PIBS $ set bootfilename=C:\tftpboot_gx\integrityappmono3.bin

Set the address of the host machine where the TFTP server is located and the kernel is saved.
PIBS $ set ipdstaddr0=192.168.0.1

Tell the board to fetch the kernel over Ethernet on reset/power-on.
PIBS $ set autoboot=eth

It can take about 3 to 5 seconds for the ifconfigcmd to bring up the eth0 device. Set the delay to 5 seconds or more, and feel this part out, since some boards take longer than others for some reason; the newer AMCC Ocotea boards seem slower to bring up eth0 than the older IBM Ocotea boards. 7 seconds is recommended.
PIBS $ set autobootdelay=7

Thursday, April 22, 2010

GHS: Mount a Remote NFS share

A little background to understand this post. We use Green Hills Software (GHS) Multi to compile code for GHS Integrity, their Real-Time Operating System (RTOS) for embedded systems. If you want to know more about Green Hills Software, you can go to their site (www.ghs.com), but here I am just going to share some gotchas I've run into with GHS.

At one point, the number of files that needed to be ftp'd to the 440GX eval board (which simulates target hardware when it is unavailable) for use by the application software exceeded the allowable space on the board's flash file system (FFS). Therefore, we had to create an NFS share on the host Windows server and mount it when loading the kernel. The downside: the address of the NFS share is hard-coded into the kernel, so you need one kernel per board. The upside: it doesn't affect the design of the waveform or test software; it is transparent, and capacity is limited only by the size of the server's hard drive.

The first part is to start a project in Multi to create a kernel for the IBM 440GX, and include a File System. When it comes up, you will see an ivfserver_module.gpj. It is a good idea to make a copy of this and keep a local version in the directory where the kernel project lives, so that you don't directly change the GHS standard files. You will have to do the same for each file directly under the .gpj as well.

Next, modify your customized copy of ffs_mountable.c and change the section at the bottom to look like:

vfs_MountEntry vfs_MountTable[] = {
    {
        "192.168.0.1:/Board1",
        /* "192.168.0.1:/Board2", */
        /* "192.168.0.1:/Board3", */
        /* "192.168.0.1:/Board4", */
        "/",
        MOUNT_NFS,
        0,
        MNTTAB_MAKEMP,
        0
    },
    {NULL, NULL, NULL, NULL, 0, 0} /* Must end with NULL/0 entry */
};

This hardcodes an NFS share into the kernel, so you need a separate kernel for each board. The example above shows a kernel for Board1, which mounts an NFS share named Board1 from the host (192.168.0.1). You can compile at this point, but do not load the kernel until you have created the specified NFS share on the host.

To create the share, use the Server for NFS built into Windows 2003 Server. You have to install it by going to Add/Remove Programs, choosing to add additional Windows components, and selecting it from there. You will need the installation CD.

Wednesday, April 21, 2010

LINUX: Download / Copy Entire Websites

There is a Linux command, wget, that fetches web pages. Sometimes running wget in recursive mode will not get you more than one page. To grab a whole site, first edit /etc/wgetrc to set robots = off, then use the following command:
wget --no-parent --wait=20 --limit-rate=20K -r -p -U Mozilla http://mxr.mozilla.org/mozilla/source/webtools/bonsai/index.html

What this does is tell the receiving server that we are a Mozilla browser (not a script), and wait between fetches to simulate a human user. The --no-parent switch keeps wget from ascending into parent directories and wandering all over the site.
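For reference, the wgetrc edit is a single line; wget can also take the same setting on the command line with -e robots=off instead of editing the file:

```
# in /etc/wgetrc (or ~/.wgetrc)
robots = off
```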

NOTE: this should only be used to obtain something you are allowed to obtain.

LINUX: Copy Files With Structure

Copying select files from one folder to another while preserving the same structure/hierarchy can be done with the find command alone.

If you have files in a directory tree called trunk that need to be copied into another tree called branch that already has the same structure, first make sure trunk and branch are at the same level, then cd into trunk. Next, run the command below to copy all files matching .mp* (just an example pattern; substitute your own) from one tree to the other.

find -type f -iname '*.mp*' -exec cp -p {} ../branch/{} \;

If the structure is NOT the same, you will need to create the appropriate structure FIRST. Do the following from within the trunk directory:

find -type d -exec mkdir -p ../branch/{} \;
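The two commands can be exercised end to end in a scratch area (the names here are made up):

```shell
# Scratch layout: a trunk with nested files, and an empty branch beside it
mkdir -p demo/trunk/audio/old demo/branch
touch demo/trunk/audio/song.mp3 demo/trunk/audio/old/clip.mp4 demo/trunk/notes.txt
cd demo/trunk

# 1. Recreate the directory structure under ../branch
find . -type d -exec mkdir -p ../branch/{} \;

# 2. Copy only the .mp* files, preserving their relative paths
find . -type f -iname '*.mp*' -exec cp -p {} ../branch/{} \;

# branch now holds audio/song.mp3 and audio/old/clip.mp4, but not notes.txt
```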

LINUX: Checking total size of directories

To find the total usage of each user's home directory without listing every subfolder, use the du command with --max-depth=1 so that subdirectories are not shown on the screen but are still counted in the totals.
$ du -h --max-depth=1

Then to sort this list, redirect it to a file such as mysize and run the following to get an ascending list of offenders in the hundreds of MB:
$ du -h --max-depth=1 > mysize

$ grep -e '[0-9][0-9][0-9]M' mysize | sort

And of course, a simple grep G mysize will show the gigabyte offenders.
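On systems with GNU coreutils, sort -h understands the human-readable size suffixes directly, which avoids the grep gymnastics entirely. A quick illustration with canned values:

```shell
# sort -h orders human-readable sizes numerically: K < M < G
printf '200K\n1.1M\n3.0G\n900M\n' | sort -h
# → 200K
#   1.1M
#   900M
#   3.0G
```

So `du -h --max-depth=1 | sort -h` gives the ascending list in one shot.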

Changing Line Endings with VI

dos2unix is an easy and popular way to change a file from DOS line endings to Unix line endings. But it isn't foolproof...

When dos2unix doesn't seem to get all the line endings, within VI or VIM or GVIM, try:
:g/^M/s///g

where ^M means typing Ctrl-V then Ctrl-M

or try
:set fileformat=unix
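When vi isn't handy, the same cleanup can be done from the shell, since a DOS line ending is just a carriage return (the ^M, byte \r) before the newline:

```shell
# A file with DOS line endings: each line ends in \r\n
printf 'hello\r\nworld\r\n' > dos.txt

# Delete every carriage return to get Unix line endings
tr -d '\r' < dos.txt > unix.txt
```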

Note, for those who don't know how to use vi: there are tons of resources online, but here is a quick crash course (maybe I'll do more in another post). Typing vi, vim, or gvim will typically launch a vi-like editor. They all work by typing a colon (:) for special commands like save, quit, and much more. To begin typing text, you have to enter insert mode by typing "i" for insert (or "a" to append after the cursor, or "o" to open a new line and insert). It seems complicated, and it is, but you get used to it and then it is super powerful. Use :x to save and quit when done.

Changing permissions in Linux

Have you ever needed to change permissions on directories (or files) in Linux, but sometimes need to only do it for a certain set? There are a bunch of different tricks to get the job done.

Take a simple example: you want to change permissions on all directories so that files created under them in the future inherit the right group. To leave existing files as they are but put the setgid bit on the directories, do the following:
$ find -type d | xargs chmod g+s

The left side of the pipe lists all directories below the current one. The right side sets the setgid bit on every directory in that list. The setgid bit forces all files created in those directories from now on to be owned by the user creating them, but to inherit the group of the parent directory.
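The shell's -g test checks for the setgid bit, so you can verify the change took effect. A sketch in a scratch directory (the directory names are made up):

```shell
mkdir -p shared/projA shared/projB

# Set the setgid bit on every directory under (and including) shared/
find shared -type d | xargs chmod g+s

# [ -g path ] is true when the setgid bit is set
[ -g shared/projA ] && echo "setgid is set"
# → setgid is set
```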

How about if you wanted to find out which file/folders are NOT owned by a certain group (perhaps to change their ownership or permissions)?
find ! -group <name_of_group>

This will list all files/folders that are NOT owned by the group mentioned. You can then put that into a pipe to change ownership or permissions as shown above.

Tuesday, April 20, 2010

Search & Replace using Perl command

This searches recursively for all files that contain <string2replace> (put any string you want here; this is just for the example) and prints a list of just the filenames.
The xargs command then passes that list to perl. Perl's -p switch runs the script on every line of each file, -i edits in place, and -e takes the script from the command line; 's/???/!!!/g' replaces ??? with !!!, and the trailing /g applies it to multiple occurrences on the same line. Run the first command below, without -i and piped to less, to preview all the changes the second one would make: without -i, the expected results go to standard out but no files actually change. With -i, the replacement happens in place and is permanent (nothing shows on screen). Think of the first form as preview mode.

First test it to see if it looks good.
$ grep -lr <string2replace> . | xargs -n1 perl -p -e 's/<string2replace>/<replacement>/g' | less

Do it for real.

$ grep -lr <string2replace> . | xargs -n1 perl -p -i -e 's/<string2replace>/<replacement>/g'
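Here is the whole pipeline run against throwaway files (the strings are examples); perl is assumed to be installed:

```shell
printf 'foo bar\nfoo again\n' > a.txt
printf 'no match here\n' > b.txt

# Preview: print what the substitution would produce, changing nothing
grep -lr foo . | xargs -n1 perl -p -e 's/foo/baz/g'

# For real: -i edits the matching files in place
grep -lr foo . | xargs -n1 perl -p -i -e 's/foo/baz/g'

cat a.txt
# → baz bar
#   baz again
```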

Using special characters in a blog or XML files

Well, I just had to learn this one for the last post. Certain characters are reserved in HTML, and if you place them in your post it will not display properly. Technically speaking, this has to do with encoding standards and such, but that isn't really relevant to what we are doing.

I've come across this problem when automatically generating XML files as well. We have a test harness at work called CppUnit that has an option to output results to an XML file. The ASSERT and error messages are whatever the developer writes in the code, and they often won't think about how that affects XML. XML has these special-character restrictions too (just like HTML for this blog), and if any of them end up in the XML file, the file becomes invalid. We had to do a search and replace of these special characters in order to keep our XML files valid.

Anyway, on to the answer. If you need to display <, >, &, ", or ', use the following table. Just type the entity, semicolon included, wherever you want the special character to appear in your blog post or XML file.
<  .......  &lt;
>  .......  &gt;
&  .......  &amp;
"  .......  &quot;
'  .......  &#39;

There are many more entities for HTML, but the above are the most common and usually all you need.
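The replacement can be scripted with sed; the one ordering gotcha is that & must be escaped first, otherwise the & inside the freshly inserted entities would get escaped again:

```shell
# Escape XML/HTML special characters; & must be handled before the others
printf '%s\n' 'a < b && c > "d"' | sed \
    -e 's/&/\&amp;/g' \
    -e 's/</\&lt;/g' \
    -e 's/>/\&gt;/g' \
    -e 's/"/\&quot;/g' \
    -e "s/'/\&#39;/g"
# → a &lt; b &amp;&amp; c &gt; &quot;d&quot;
```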

How do I SCP files between machines?

To transfer a file to another machine:
scp <local_filename> <username>@<destination_server>:<remote_location>

Reverse the above if you want to transfer a file from another machine.
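Spelled out, the reverse direction just swaps the source and destination arguments (all names here are placeholders):

```
scp <username>@<source_server>:<remote_filename> <local_destination>
```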

EXT3 Error

Have you ever had a Linux server lock up, with the local monitor just continuously reporting the following?
EXT3-fs error (device md0) in start_transaction: Journal has aborted
One possible way out is to force a reboot through the magic SysRq interface. The first command enables SysRq; echoing b then reboots the machine immediately, without syncing or unmounting disks, so treat it as a last resort:
$ echo 1 > /proc/sys/kernel/sysrq
$ echo b > /proc/sysrq-trigger