
Linux Tricks & Hacks

1. Running the top command in batch mode

top is a very useful command for monitoring the utilization of your system while working with Linux. It is invoked from the command line and works by displaying lots of useful information, including CPU and memory usage, the number of running processes, load, the top resource consumers, and other useful bits. By default, top refreshes its report every 3 seconds.
Most of us use top in this fashion; we run it inside the terminal, look at the statistics for a few seconds and then graciously quit and continue our work.
But what if you wanted to monitor the usage of your system resources unattended? In other words, let some system administration utility run and collect system information and write it to a log file every once in a while. Better yet, what if you wanted to run such a utility only for a given period of time, again without any user interaction?
There are many possible answers:
  • You could schedule a job via cron.
  • You could run a shell script that runs ps every X seconds or so in a loop, incrementing a counter until the desired number of iterations has elapsed. But you would also need uptime to check the load and several other commands to monitor disk utilization and whatnot.
Instead of going wild trying to patch together a script, there's a much, much simpler solution: top in batch mode.
top can be run non-interactively, in batch mode. Time delay and the number of iterations can be configured, giving you the ability to dictate the data collection as you see fit. Here's an example:

top -b -d 10 -n 3 >> top-file

We have top running in batch mode (-b). It's going to refresh every 10 seconds, as specified by the delay (-d) flag, for a total count of 3 iterations, as specified by the number (-n) flag. The output will be appended to the file top-file.
And that does the trick. Speaking of writing to files ...

2. Write to more than one file at once with tee

Say you want to write the same output to more than one file at once. With static data, this is not a problem: you simply repeat the write operation. With dynamic data, again, this is not much of a problem: you capture the output into a temporary variable and then write it to a number of files. But there's an easier and faster way of doing it, without redirection and repetitive write operations. The answer: tee.

tee is a very useful utility that duplicates pipe content. Now, what makes tee really useful is that it can append data to existing files, making it ideal for writing periodic log information to multiple files at once.

Here's a great example:
ps | tee file1 file2 file3
That's it! We're sending the output of the ps command to three different files! Or as many as we want. All three files are created at the same time and they all contain the same data. This is extremely useful for constantly changing output, which you must preserve in multiple instances without typing the same commands over and over like a keyboard-loving monkey.

Now, if you wanted to append data to files, that is, periodically update them, you would use the -a flag, like this:
ps | tee -a file1 file2 file3 file4

3. Unleash the accounting power with pacct

Did you know that you can log the completion of every single process running on your machine? You may even want to do this, for security, statistical purposes, load optimization, or any other administrative reason you may think of. By default, process accounting (pacct) may not be activated on your machine. You might have to start it:
/usr/sbin/accton /var/account/pacct
Once this is done, every single process will be logged. You can find the logs under /var/account. The log itself is in binary form, so you will have to use a dumping utility to convert it to human-readable form. To this end, you use the dump-acct utility.
dump-acct pacct
The output may be very long, depending on the activity on your machine and whether you rotate the logs, which you should, since the accounting logs can inflate very quickly.
And there you go: the list of all processes run on our host since the moment we activated accounting. The output is printed in nice columns and includes the following, from left to right: process name, user time, system time, effective time, UID, GID, memory, and date. Other ways of starting accounting include the following:
/etc/init.d/psacct start
Or:
/etc/init.d/acct start
In fact, starting accounting using the init script is the preferred way of doing things. However, you should note that accounting is not a service in the typical sense. The init script does not look for a running process - it merely checks for the lock file under /var. Therefore, if you turn accounting on or off using the accton command, the init scripts won't be aware of this and may report false results.
BTW, turning accounting off with accton is done just like that:
/usr/sbin/accton
When no file is specified, the accounting is turned off. When the command is run against a file, as we've demonstrated earlier, the accounting process is started. You should be careful when activating/deactivating the accounting and stick to one method of management, either via the accton command or using the init scripts.

4. Dump utmp and wtmp logs

Like pacct, you can also dump the contents of the utmp and wtmp files. Both these files provide login records for the host. This information may be critical, especially if applications rely on the proper output of these files to function.
Being able to analyze the records gives you the power to examine your systems in and out. Furthermore, it may help you diagnose problems with logins, for example, via VNC or ssh, non-console and console login attempts, and more.
You can dump the logs using the dump-utmp utility. There is no dump-wtmp utility; the former works for both.
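For example, to dump the current login records (the utmp file usually lives at /var/run/utmp, though the exact path can vary by distribution):
dump-utmp /var/run/utmp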

You can also do the following:
dump-utmp /var/log/wtmp

5. Monitor CPU and disk usage with iostat

Would you like to know how your hard disks behave? Or how well your CPU churns? iostat is a utility that reports statistics for CPU and I/O devices on your system. It can help you identify bottlenecks and mis-tuned kernel parameters, allowing you to boost the performance of your machine.
On some systems, the utility will be installed by default. Ubuntu 9.04, for example, requires that you install the sysstat package, which, by the way, contains several more goodies that we will soon review.
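On a Debian-based system such as Ubuntu, installing it is typically a one-liner:

sudo apt-get install sysstat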
Then, we can start monitoring the performance. I will not go into the details of what each bit of displayed information means, but I will focus on one item: the first report produced by the utility shows average statistics since the last reboot.
Here's a sample run of iostat:
iostat -x 10 10
The utility runs 10 times, once every 10 seconds, reporting extended (-x) statistics.

6. Monitor memory usage with vmstat

vmstat does a similar job, except it works with virtual memory statistics. For Windows users, please note the term virtual does not refer to the pagefile, i.e. swap. It refers to the logical abstraction of memory in the kernel, which is then translated into physical addresses.
vmstat reports information about processes, memory, paging, block IO, traps, and CPU activity. Again, it is very handy for detecting problems with system performance. Here's a sample run of vmstat:
vmstat 1 10
The utility runs 10 times, reporting every second. For example, we can see that our system has taken some swap, but it's not doing much with it; there's approximately 35MB of free memory and very little I/O activity, as there are no blocked processes. The CPU utilization spikes from just a few percent to almost 90% before calming down.
Nothing especially exciting, but in critical situations, this kind of information can be invaluable.

7. Combine the power of iostat and vmstat with dstat

dstat aims to replace vmstat, iostat and ifstat combined. It also offers exporting data to .csv files that can then be analyzed using spreadsheet software. dstat uses pleasant color output in the terminal.
Plus, you can make really nice graphs from the exported data. A spike in such a graph might come from opening the Firefox browser, for instance.
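As a quick illustration of the CSV export, a run like the following samples CPU, disk and network statistics every 5 seconds, 10 times, and writes them to a file as well as to the terminal (the option set and file name are just an example):

dstat -c -d -n --output dstat.csv 5 10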

8. Collect, report or save system activity information with sar

sar is another powerful, versatile utility. It is a sort of jack of all trades when it comes to monitoring and logging system activity. sar can be very useful when you are trying to analyze strange system problems where normal logs like boot.msg, messages or secure under /var/log do not yield much information. sar writes the daily statistics into log files under /var/log/sa. As we did before, we can monitor CPU utilization, every 2 seconds, 10 times:
sar -u 2 10
Or you may want to monitor disk activity (10 iterations, every 5 seconds):
sar -d 5 10
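You can also read the statistics back from one of the daily files mentioned above; the file name depends on the day of the month, so sa15 below is only an example:

sar -u -f /var/log/sa/sa15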
Now for some really cool stuff ...

9. Create UDP server-client - version 1

Here's something radical: create a small UDP server that listens on a port. Then configure a client to send information to the server. All this without root access!

Configure server with netcat

netcat is an incredibly powerful utility that can do just about anything with TCP or UDP connections. It can open connections, listen on ports, scan ports, and much more, all this with both IPv4 and IPv6.
In our example, we will use it to create a small UDP server on one of the unprivileged ports (above 1024). This means we won't need root access to get it going.
netcat -l -u -p 42000
Here's what we did:
-l tells netcat to listen, -u tells it to use UDP, -p specifies the port (42000).
We can indeed verify with netstat:
netstat -tulpen | grep 42000
And we have an open port.

Configure client

Now we need to configure the client. The big question is: how do we tell our process to send data to a remote machine, to a UDP port? The answer is quite simple: open a file descriptor that points to the remote server.
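A minimal sketch of the kind of BASH script we use to test the connection, assuming the server sits at 192.168.1.143 (matching the exec line discussed below) and that the port is passed as the first argument; the messages it sends are purely illustrative:

#!/bin/bash
# Hypothetical test client: send a few UDP datagrams to the server
# Usage: ./udp-client.sh <port>

# Open file descriptor 104 pointing at the remote UDP server
exec 104<> /dev/udp/192.168.1.143/$1

# Send a few sample messages
for i in 1 2 3; do
    echo "test message $i from $(hostname)" >&104
    sleep 1
done

# Close the descriptor when done
exec 104>&-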
The most interesting bit is the line that starts with exec.
exec 104<> /dev/udp/192.168.1.143/$1
We created a file descriptor 104 that points to our server. Now, it is possible that the file descriptor number 104 might already be in use, so you may want to check first with lsof or randomize the choice of the descriptor. Furthermore, if you have a name resolution mechanism in place, you can use a hostname instead of an IP. If you wanted to use a TCP connection, you would use /dev/tcp.
The choice of the port is defined by the $1 variable, passed as a command-line argument. You can hard code it - or make everything configurable by the user at runtime. The rest of the code is unimportant; we do something and then send information to our file descriptor, without really caring what it is. Again, we need no root access to do this.

Test connection

Now, we can see the server-client connection in action. Our server is an Ubuntu 8.10 machine, while our client is a Fedora 11 machine. We run the script on the client and watch the command line on the server to see the messages arrive.

Cool, eh?

10. Configure UDP server-client - version 2

The limitation of the exercise above is that we do not have control over some of the finer aspects of our connection. Furthermore, the connection is limited to a single endpoint. If one client connects, others will be refused. To make things more exciting, we can improve our server. Instead of using netcat, we will write one of our own - in Perl.
Perl is a powerful programming language, very flexible, very neat. I must admit I have only recently begun dabbling in it, so do not expect any miracles, but here's one way of creating a UDP server in Perl - there are tons of other implementations available, better, smarter, faster, and more elegant.
The code is very simple. First, let's take a look at the entire file and then examine sections of code. Here it is:
#!/usr/bin/perl

use IO::Socket;

# port the UDP server will listen on
my $server_port = '50060';

$server = IO::Socket::INET->new(LocalPort => $server_port,
                                Proto => "udp")
or die "Could not create UDP server on port $server_port : $@\n";

my $datagram;
my $MAXSIZE = 16384; # buffer size

while (my $data = $server->recv($datagram, $MAXSIZE))
{
    # echo the incoming datagram to the screen
    print $datagram;

    my $logdate = `date +"%m-%d-%H:%M:%S"`;
    chomp($logdate);

    # write the datagram to a timestamped log file
    my $filename = "file.$logdate";
    open(FD, ">", "$filename");
    print FD $datagram;
    close(FD);
}

close($server);
The code begins with the standard Perl shebang declaration. If you want extra debugging, you can add the -w flag. If you want to use strict code, you may also want to add the use strict; declaration. I warmly recommend this.
The next important bit is this one:
use IO::Socket;
This line tells Perl to use the IO::Socket object interface. You can also use IO::Socket::INET specifically for Internet domain sockets. For more information, please check the official Perl documentation.
The next bit is the creation of the socket, i.e. server:
$server = IO::Socket::INET->new(LocalPort => $server_port,
                                Proto => "udp")
or die "Could not create UDP server on port $server_port : $@\n";
We are trying to open the local UDP port 50060. If this cannot be done, the script will die with a rather descriptive message.
Next, we define a variable that will take incoming data (datagram) and the buffer size. The buffer size might be limited by the network implementation or network restrictions on your router/switch or the kernel itself, so some values might not work for you.
And then, we have the server doing some hard work. It prints the data to the screen. But it also creates a log file with a time stamp and prints the data to the file as well.
The beauty of this implementation is that the server permits multiple incoming connections. Of course, you will have to decide how you want to differentiate the data sent by different clients, whether by a message header or by using additional IO::Socket::INET options like PeerAddr.
On the client side, nothing changes.

Conclusion

That's it for now. This crazy collection should help you impress your friends, evoke a smile from your peers or even your boss, and help you be more thorough and productive when it comes to system administration tasks. Some of the utilities and tricks presented here are tremendously useful.
If you're wondering what distribution you may need to be running to get these things done, don't worry. You can get them working on all distros. Throughout this document, I demonstrated using Ubuntu 8.10, Ubuntu 9.04 and Fedora 11. Debian-based or RedHat-based, there's something for everyone.

10 essential tricks for admins


The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential tricks from the Blogforadmin's bag of tricks. These tips will save you time, and even if you don't get paid more money to be more efficient, you'll at least have more time to play ... hurray!
Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it ejects immediately. He then complains that on most enterprise Linux servers, if a process is running in that directory, the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disc out on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.
Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:
# eject
You'll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let's find out who is using it.
# fuser /media/cdrom
You see that a process is still running and, indeed, it is our own fault that we cannot eject the disc.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
# eject
fuser is good.
Trick 2: Getting your screen back when it's hosed

Try this:
# cat /bin/cat
Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?
You type reset. But wait, you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat - especially if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:
# reset
Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.
Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."
"Fine," you say. "What machine are you on?"
David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
"Hey David, run the following command on your terminal: # screen -x foo."
This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.
At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.
The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.
But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type Ctrl-A D. (That is, hold down the Ctrl key and strike the A key, then press the D key.)
You can then reattach by running the screen -x foo command again.
Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get onto the machine and change the password. This doesn't work in all cases (like if you set a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.
First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.



Figure 1. GRUB screen after reboot

Press e to edit the highlighted entry, select the kernel line, and press e again so you can edit it (Figure 2).

Figure 2. Ready to edit the kernel line

Append the number 1 as an argument at the end of the kernel line (Figure 3), press Enter, and then b to boot. The machine comes up in single-user (runlevel 1) mode, which drops you into a root shell without asking for a password.

Figure 3. Append the argument with the number 1

From that shell, run passwd to set a new root password, then reboot normally:

# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Trick 5: SSH back door

Sometimes you need remote support on a machine (call it ginger) that sits behind your company firewall, and the technicians helping you (call them tech) cannot reach it directly. If you can SSH out from ginger to an outside machine you control (blackbox.example.com), you can open a small, fully encrypted hole back in through that outside machine (Figure 4).

Figure 4. Poking a hole in the firewall

Here's how to proceed:
  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you forward connections made to port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger this way: you're not putting ginger out on the Internet naked.
    You can do this with the following syntax:
    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com
    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:
    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done
    to keep the machine busy. And minimize the window.
  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:
    root@tech:~# ssh thedude@blackbox.example.com
  4. Once tech is on the blackbox, they can SSH to ginger using the following command:
    thedude@blackbox:~$ ssh -p 2222 root@localhost
  5. Tech will then be prompted for a password. They should enter the root password of ginger.
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel
VNC, or virtual network computing, has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.
For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.
You can try SSH'ing to ginger with the -X option and launching it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.
Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:
    1. Start a VNC server session on ginger. This is done by running something like:
      root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99
      The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are on a really slow connection, a depth of 8 may be a better option. Using :99 specifies the display number the VNC server runs on. The VNC protocol starts at port 5900, so specifying :99 means the server is accessible on port 5999.
      When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)
    2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:
      root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com
      Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:
      thedude@blackbox:~$ vncviewer localhost:99
      That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.
    3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:
      root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com
      This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!
    4. From tech, VNC to ginger by running the command:
      root@tech:~# vncviewer localhost:99
      Tech will now have a VNC session directly to ginger.
    While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.
    Let me add a trick to this trick: If tech were running the Windows® operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.

Figure 5. PuTTY can forward SSH ports for tunneling
If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.
Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to NFS-mount ginger's shared filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well:
1Gb = 1024Mb; 1024Mb / 8 = 128MB ("b" = "bits", "B" = "bytes")
But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:
# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz
You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
On the beckham node, run:
# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60
You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!
Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.
First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:
free -tm | grep Mem | awk '{print $2}'
That command says to:
  • Use the free command to get the memory size in megabytes.
  • Take the output of that command and use grep to get the line that has the string Mem in it.
  • Take that line and use awk to print the second field, which is the total memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200 nodes is piped to the sort command so that all the memory values are sorted.
Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases:
  • If all the nodes, n001-n200, have the same memory size, then only one number will be displayed. This is the size of memory as seen by each operating system.
  • If node memory size is different, you will see several memory size values.
  • Finally, if the SSH failed on a certain node, then you may see some error messages.
This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.
What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.
Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up in your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.
Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system that you may need to verify, troubleshoot, or give to remote support.
First, let's gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo
This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l
I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.
And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for the information, so piping it through less is a more efficient way to view it. On my Lenovo T61 laptop, the output looks like this:
# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...


This is much more efficient than rebooting your machine and looking at the POST output.
To examine the driver and firmware versions of your Ethernet adapter, run ethtool:
# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0





TV on Linux


A number of cards exist that allow you to watch and record TV on your PC. Most come with software for Windows only, like so many things, but it is possible to do the same on Linux. Linux comes with several drivers which make up the Video4Linux framework. Several cards are supported by these drivers, and a list of them is available at http://roadrunner.swansea.linux.org.uk/v4l.shtml. That covers the driver side. You also need software to use the devices.
Several programs are available to watch TV, capture images and even run Web applications. A list of some of these programs is available at http://www.thp.uni-koeln.de/~rjkm/linux/bttv.html, including datasheets.

Find hardware information in Linux


When the Linux system boots, it will try to detect the hardware installed in the computer. It will then create a virtual file system called procfs and store important information about your system in it. You can get information about your system simply by browsing the directory /proc. The files in there contain information such as the processor you have, the amount of memory, and the file systems the kernel currently supports. A useful application exists to browse the /proc file system. It is called Xproc and is available from http://devplanet.fastethernet.net/files.html
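For example, a few of the files worth reading directly:

cat /proc/cpuinfo       # processor type and speed
cat /proc/meminfo       # amount of memory and how it is used
cat /proc/filesystems   # file systems the kernel currently supports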


Detecting 2 ethernet cards in Linux


To configure an ethernet card in Linux, you need to enable it in the kernel. The kernel will then detect your ethernet card if it is at a common IO port. But it will stop there and never check whether you have 2 ethernet cards. The trick is to tell the ethernet driver that there are 2 cards in the system. The following line tells the kernel that there is an ethernet card at IRQ 10 and IO 0x300, and another one at IRQ 9 and IO 0x340: ether=10,0x300,eth0 ether=9,0x340,eth1. You can add that line on bootup at the "boot:" prompt, or in the /etc/lilo.conf file. Don't forget to run: lilo
That will reload the lilo.conf file and enable changes.
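If you go the /etc/lilo.conf route, kernel parameters like these typically sit on an append line inside the image section, for example:

append="ether=10,0x300,eth0 ether=9,0x340,eth1"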

Auto mounting a partition


It’s been a while. A while since I’ve actually had to manually edit /etc/fstab to automount a partition. So long, in fact, that I searched my blog trying to find out how to do it. To my surprise, I’d never actually written one. If I had, I couldn’t find it. Here’s to you, memory:
According to /etc/fstab, this is how it’s done:
# <file system> <mount point> <type> <options> <dump> <pass>
For those of us that are human, that can mean very little. What you can do, in hopefully slightly more understandable terms, is add a line that looks like this:
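For example, a hypothetical entry that mounts the first partition of a second disk at /mnt/data as ext4 with default options (the device, mount point and filesystem type are illustrative):

/dev/sdb1  /mnt/data  ext4  defaults  0  2

After adding the line, create the mount point with mkdir and run mount -a to check that the partition mounts cleanly.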

Setup SSH Key Authentication (Password-Less Authentication)


Setup ssh key authentication for password-less login between servers.  For use by ssh/sftp users or scripts.
Source Server (or local system)
Generate an RSA key for your user on this system; you can also use DSA. This asks for a key pass-phrase, but you can leave it blank.
ssh-keygen -t rsa
This asks for the location to place the generated key; by default it will be in your home directory (ex: /home/your_username/.ssh/). This generates two files: id_rsa and id_rsa.pub. The content of id_rsa.pub is what we need to copy to the destination server.
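One common way to get it there, with the user and host names below as placeholders, is ssh-copy-id; appending the key to the remote ~/.ssh/authorized_keys file by hand works just as well:

ssh-copy-id your_username@destination_server

or:

cat ~/.ssh/id_rsa.pub | ssh your_username@destination_server 'cat >> ~/.ssh/authorized_keys'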

Restrict network access by time or IP address with Squid


There are a number of reasons why you might want to restrict network access. Perhaps you run a cafe with web access, or you have young or teenage children and you want them to only be able to use the network at certain times. There are certainly tools out there to do this on a PC-by-PC basis, but why not employ a proxy server instead? One of the best (and most robust) proxy servers available for the Linux operating system is the Squid proxy server. But don't let the name fool you: you do not have to install Squid on a server. You can just as easily install Squid on a Linux desktop machine and control network access for your LAN.
Of course, when you open up your /etc/squid/squid.conf file you might be a bit overwhelmed. So in this article I am going to show you two ways to limit access with Squid (instead of tossing the whole configuration file at you at once). I will also show you the quick and dirty method of installing Squid on a Fedora 13 machine. Once done with this article, you will at least be able to control network access by time or by IP address. In later articles we will discuss other ways to control network access with Squid.
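To give a flavour of what those two kinds of restriction look like in /etc/squid/squid.conf, here is a minimal sketch; the subnet, the allowed hours and the ACL names are illustrative, not part of a stock configuration:

# allow the local subnet, but only on weekdays between 08:00 and 17:00
acl lan src 192.168.1.0/24
acl worktime time MTWHF 08:00-17:00
http_access allow lan worktime
http_access deny all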