Tuesday, December 21, 2010

Camera Straps

For as long as I can remember, I've used the standard camera straps on all of my cameras. Of course, the point and shoots got the smaller wrist strap, but the SLR cameras all wore your basic neck strap with two points connecting to the camera body. About 5 years ago I swapped the factory straps for the Op-Tech Pro Strap, which I really like. It uses a neoprene material where it rests on your neck and offers more flex than traditional straps, which is handy with heavy cameras. It's also reasonably priced, and I have three of them across my cameras.

In the past few years, different styles of camera straps have been getting popular on the camera forums. Sling style straps, hand straps, and body carriers have all gained a loyal following. As someone who has struggled carrying a shoulder-strap camera bag, then a tripod bag, and a camera with yet another strap, I've taken a look around for what would be a better solution. I have not purchased any of the new styled straps, but I have some comments, as I carry my gear in different ways to match my needs. As of now, I'm still using the same Op-Tech camera strap but considering a sling strap.

Keeping the camera close yet out of the way

One issue I run into is how to secure my camera while also keeping it out of the way while walking or traveling. Sounds confusing, but here's a common scenario: I carry a shoulder bag, worn on my left shoulder or across my body on the right shoulder. Then I have a tripod bag, usually carried in the same manner, and then I want my camera within easy reach, again on my neck. Sounds confusing, and it is!

I started taking my tripod out less, and now I use a beltpack for my gear instead of the standard shoulder bag. I've also started carrying my camera not over the neck, but more like a messenger bag, across my back. I like this method for various reasons: it's really easy to keep the camera out of my way, especially with a large lens, and it allows a secure hold on the camera while moving. I've had the camera in this position while running or walking through crowds and never had a problem.

Also, speaking of crowds, I like to hold the camera in this position because it's not swinging side to side as I walk. I feel it's a smaller profile, and I like to keep one hand on my bag to make sure it's not swinging side to side either. A large bag weighing near 15lbs isn't something you want bumping into people.

Why I like the standard strap

As mentioned earlier, I haven't tried the sling straps yet; prices run about $50~100 for a complete setup. One thing I really like about the standard strap is two mounting points versus one. The tripod socket is pretty strong on most cameras, built into the camera frame, but I'm not sure it's the best place to dangle a camera from all day. If you have been reading the forums about these sling straps, there are also complaints that the tripod socket connection will work itself loose, dropping the camera to the ground.

With a two point strap like the standard camera strap, the body and lens have less chance to twist around. Many of the sling strap lugs that connect to the tripod socket are fixed, like an eyelet bolt, which does not allow free movement, so if the camera starts to spin clockwise, it's going to unscrew the mount. I have seen one brand, Sun-Sniper, that added a bearing system to their tripod mount, which I think is safer.

Not to mention that, I think, the standard strap connections built into the camera body are more secure mounting points. Since there are two, there's less stress on the body and also, as discussed, less twisting. One big plus of the standard camera strap is that you can wear it either over the neck like a standard strap or over one shoulder like a messenger bag. Then it's more like a sling strap, giving you quick access to the camera while keeping it out of the way.

Another point in the standard camera strap's favor, I think, is that it keeps the camera closer to the body. Now, this is something personal, but I really don't like having my camera swinging far from my body, especially with a large lens. Even if you're a working pro, the last thing you want is to lose a lens from bumping the corner of a desk while on assignment. I've seen some of these straps hang very low and loose, really asking to be banged up against anything.

Another point against the sling style is for users of quick release tripod mounts. Most tripods now use beveled quick release plates: metal plates attached to the camera's tripod socket, with a beveled edge that locks the camera to the tripod head. That leaves no place to mount a sling style strap unless you remove the quick release plate and replace it with the loop for the sling strap. I like to use my tripod when I'm out taking photos, but this would require me to completely change my setup.

Hand straps?

There are some photographers who don't use a strap at all in the traditional sense. Their strap of choice is a hand strap, which I'm seriously considering. The sling and standard camera straps both support the camera while carrying it, but they don't offer any support while actually using the camera. During a long wedding shoot, I held my camera with a flash unit plus a heavy lens all day, after which I really wished for some hand support. I've been looking at hand straps as they offer two positives.

One is the added strap holding the camera to your hand: instead of gripping with your fingers, you can relax your hand and the weight is supported by the arm muscles. The other is that it's out of the way, with fewer straps to deal with and easier to pack in the camera bag. The slight downside is that you will need either a battery grip or an attachment to mount the hand strap, and, the biggest problem, there's no safety strap in case you drop the camera.

I like to have my camera tethered to myself at all times, or else resting on flat ground. I try to be very careful with my gear, but of course mistakes and accidents happen, so I take as many precautions as possible. I would assume you could attach both a hand strap and a camera strap to the same top mount, but it would be very hard to fit both straps through the same mounting point. That's a plus for the sling style straps, since they use the tripod mounting point instead.

What is really the best strap?

It really depends on your photography style and needs. I really like having a multipurpose strap, and the standard camera strap does that well for me. I can carry it in the normal position, or I can carry it like a sling. It's not in the way when I'm walking, but it can be right in front when needed. It's also great that I can use other attachments in the tripod socket as needed, like a quick release bracket.

I think for now I'm going to stick with the standard camera strap and maybe purchase a hand strap to save arm strength; this gives me the most options.

Wednesday, November 17, 2010

Finding a job by reversing the roles

In times when jobs are harder to find, you need to keep your resume current and constantly check job postings. In the last few months, I've been reading job boards a few times a week, looking for roles that match me, checking the requirements, the skills they request, and the type of work. I also note the company type, whether it's a start-up, a contract position, or contract-to-hire, along with the job title.

It's really interesting, and if you have read What Color is Your Parachute?, the best method to find a job is by working in reverse order. How does that work? First, think about where the need for the job comes from. When there's a need for a new position, oftentimes it's requested by a hiring manager, who reports the opening to the HR department, which then posts it on the Internet, where you might see it and reply. The problem is that you're applying for the job through the person who might have the least knowledge of the job, who then filters the contacts to the hiring manager.

Not to mention that you may be the perfect employee, but you need to filter past many hands and departments; it's more difficult than just having the right skills to get the interview. When thinking about this problem, I think about the other side, the hiring manager's perspective.

The hiring manager knows what to look for and what they want in an employee: the skills required and the function this person will perform in the role. But it's another thing to sort out the people with the actual skills from the people who may overrate their skills on a resume, as anyone can mention a technical phrase to a recruiter to get an interview. So now the hiring manager needs to filter through all of these resumes, usually a long task of finding the right person.

The big problem I see here is that there's a large communication gap between the two main parties: the person who wants a job and the hiring manager who wants to hire someone. With many parties in between, it's a common issue that while you may be the perfect person for the job, a person down the chain of the hiring process may have other ideas. It could be as simple as you're not experienced but still have the knowledge, or maybe you understand product X but they are looking for someone who knows product Y.

It's very confusing, and oftentimes I think the hardest part of finding a job is getting past the HR department, as their role is often not to hire someone but to filter out those who do not match the job perfectly. Not to say the HR department's job is easy; trying to become an expert for many departments in a company while filtering out the correct people is an incredibly difficult job.

I think this is a good thing to keep in mind when writing a resume. Often you need to really show what you can do and remember that the resume is the first thing a hiring manager will see of you. It's really hard to present myself on a piece of paper, but considering the limited time people can spend on each resume, it's best to be honest and state your skills clearly. Also remember to tune your resume for the job you're applying for.

This means that if you're applying for a customer service job, highlighting your customer service skills will be better than listing other skills (though those might be a close second). The idea before was that you would make one resume as a "one size fits all" approach, but now I feel a resume-per-job approach works best. In some cases, recruiters called me about a role I had the skills for and asked if I could do certain tasks. Since I didn't list the skills by the exact names on the job description they were reading from, the recruiter asked me a few questions, then told me I probably don't have the skills and wished me luck on my job search.

I recommend that if you're looking for a certain job role, you make a resume especially suited for that job. Then if you have a secondary job role, make another resume for that. I think of it somewhat like fishing: one lure might work OK for catching fish in a large lake, but having multiple lures will work even better, as different fish are attracted to different types of lures.


Wednesday, November 10, 2010

Keeping your data safe

When I purchased my first digital camera, a Kodak DC280, I took photos of my friends and other events, enjoying the new-to-me technology. But very few of those photos remain, due to an accidental format I did on my main computer. Back then, I would reformat my computer and reinstall Windows, and every time I'd realize I had lost some data here or there, either photos, school work, or game saves, it was always something I'd remember as the computer was finishing the Windows install. Of course, at that point it was too late to recover and the data was basically gone. I had some backups, but nothing that covered all of the data I lost.

After years of working with data on a personal level and also on a business enterprise level, I have some tips and tricks for the average end user. Here are my recommendations, and they are all low cost and, where possible, subscription-free.

A safe backup

A common mistake is backing up your data on the same computer. Recently, while rebuilding my laptop, I installed a second hard drive, the same size and manufacturer as the first drive. When I reinstalled Windows, the installer asked which drive I wanted to install to, so I figured it was drive 0, not the second drive I had installed, which was drive 1. When the install was done, I checked my second drive to retrieve the data I had stored on drive 1, but it was all gone. I checked again and found that when I reinstalled Windows, I had installed it on the data backup drive, not the Windows system drive.

I couldn't figure out how this mistake took place until I realized that the manufacturer had installed the Windows OS on the wrong drive, drive 1, so the new drive I installed became drive 0. Lucky for me, I only lost a bunch of ISO images I had downloaded, but it could have been worse. This example is why backing up on the same computer is not really a solid "backup".

For a better backup solution, you should think about the following. Is my data safe if my computer's hard disk crashes? What if my computer's second hard disk crashes? What if my backup drive is stolen? Below is a list, from least safe to most safe, of the backup setups I personally recommend for home users.

1) Backing up on the same drive - This is fine for a document you are actively editing, a work in progress, but you should save your work at the end of the day to another drive or location, like a USB stick.

2) Backing up on another drive, same computer - This is a good solution for storing items that need quick access, like photos or music. In most cases the second hard drive sees much less use than the main drive, which means less chance of failure, but it would not protect against theft of the computer or a virus.

3) Backing up on an external drive - Now we are starting to get into a much safer solution. Your data is off the computer, yet still easily accessible over USB, a great combination. The remaining risk is theft or fire in the home, but this is covered if you store the drive in a safe location, or if you keep another backup in a different physical location.

4) Backing up on an external drive, plus on-line backup - This is the method I use, and it's been pretty handy. While it's not free, typically costing about $100 per year, my data is backed up once by myself and then automatically by the backup provider. Since I don't back up daily, only whenever I offload a large batch of images from my camera, I wanted something to cover the small files I was working on in between, and this works perfectly.

Know where your data is located

What is the best method to back up your data? I would first recommend knowing where your data is located, then deciding which data is most important to back up. One of my biggest mistakes when I first started using computers was storing data all over the place: on different drives, in my personal home folder, or under some other folder. It was a large mess. Now, I use a different method and keep the data in one location.

On a Windows system, I store all of my data under the account's documents folder; this way, if I back up the "My Documents" folder, I save all of my web page links, settings, Outlook PST files, photos, data, etc. On a Linux system, I follow the same method: I just make sure to save all of my work under /home/robert, and this makes things much easier to manage.

How to backup your data

Backing up data can be done in many ways, but I really prefer simple over complex. There's plenty of free software, paid software, and great built-in software available for the home computer. I recommend that you use either the built-in backup software or a simple file copy to back up your data. Here's my personal reason why.

When you use special software to back up your data, you're usually compressing the files into a new format. In most cases, the new format is not cross-compatible; for example, if I'm using brand X for my backup and it writes a format called "mybackup.mmg", then it's typically only going to work with software that knows about mmg files, in this case, the brand X application.

The big issue with this is that your data is only safe as long as the brand X software is available, which, given the rate of software changes, could be months to years. For this reason I really only use two methods to back up my data on a Windows system: either Windows Backup, which is built into most Windows editions above the "home" editions, or a simple file copy. In years of using Windows Backup, I never had an issue where one version of Windows could not read another version's backup files, and thanks to the automated process with a simple wizard, it's very easy to use.

Recently I've switched to a simple script that runs a copy job from my desktop to a USB drive, without any special backup software. For me this runs very easily: I have access to the backed-up files at any time, and as long as the drive is still working, my data is accessible.
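For reference, here's a minimal sketch of what such a copy job could look like in Python. The paths in the example call are hypothetical, and the skip-unchanged-files logic is my own illustration, not a specific tool:

```python
import os
import shutil

def copy_job(source, destination):
    """Mirror files from source into destination, copying only new or changed files."""
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        target_dir = os.path.join(destination, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            # Skip files whose size and modification time haven't changed.
            if os.path.exists(dst):
                s, d = os.stat(src), os.stat(dst)
                if s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime):
                    continue
            shutil.copy2(src, dst)  # copy2 preserves timestamps

# Example (hypothetical paths): copy_job(r"C:\Users\robert\Documents", r"E:\backup")
```

Because copy2 preserves timestamps, rerunning the script only copies what changed, which keeps the copy job fast on a USB drive.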

Size limits of backup

In most cases, unless you have a large amount of backup storage, you will not want to back up the entire profile on your computer. You might also want the data easily accessible on the USB drive as D:\photos instead of D:\backup\robert\photos. Here comes a common issue: how can I back up everything that is important without the extras that are not? Let's say I want to back up only my documents and Outlook PST data from my workstation.

If someone were to ask me this, I would take the following steps on a Windows system.

1) Where are you saving your data? It's easy to assume "My Documents", but I have seen users save data to the root of the main system drive, in most cases C:\, in a folder like "my data", or even under a strange folder like "09". My recommendation is that you search the drives for Microsoft Office documents, images, music, and Outlook PST files at the very least. I usually use a search string such as "*.xls; *.xlsx; *.doc; *.docx; *.pst; *.jpg; *.jpeg; *.mp3; *.wma; *.wmv; *.mov; *.m4a; *.mpg; *.mpeg".

This covers most of the common file formats, including Apple iTunes formats, but check twice and make sure you record all of the possible paths the user might have used.
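The same search can also be scripted. Here's a rough Python sketch of step 1 that walks a folder tree and lists files matching the extensions above; the drive root in the example is hypothetical:

```python
import os

# Common user-data extensions from the search string above.
EXTENSIONS = {".xls", ".xlsx", ".doc", ".docx", ".pst", ".jpg", ".jpeg",
              ".mp3", ".wma", ".wmv", ".mov", ".m4a", ".mpg", ".mpeg"}

def find_user_data(root):
    """Return paths of files under root whose extension looks like user data."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Compare extensions case-insensitively (.JPG vs .jpg).
            if os.path.splitext(name)[1].lower() in EXTENSIONS:
                matches.append(os.path.join(dirpath, name))
    return matches

# Example (hypothetical drive root): paths = find_user_data("C:\\")
```

Running this against each drive in turn gives you a list of every data file worth recording before you decide what to back up.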

2) Once you know the data locations, you can run either the system's backup software or a simple file copy to back up your data to another location.

Test your backup solution

Before assuming everything is OK, test your backup solution once in a while, before committing to the process for months or years. It's easy to restore a few files, and this small test can save you hours later recreating work, if recreating it is even possible. :)
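One low-effort way to spot-check a file-copy backup is to compare checksums between an original file and its backup copy. A small sketch in Python; the helper names are mine, not from any particular tool:

```python
import hashlib

def file_checksum(path):
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_matches(original, backup):
    """True if the backup copy matches the original byte for byte."""
    return file_checksum(original) == file_checksum(backup)
```

Checking a handful of files this way every so often is usually enough to catch a silently failing copy job or a dying drive.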

Hope this has been helpful and here's some links for reference.

Windows XP Backup Made Easy

Mozy Backup Service

Crashplan Backup Service

A simple Linux backup plan


Sunday, November 07, 2010

Going with Ubuntu full time

Recently I've been using Ubuntu 10.04 LTS on my main laptop, switching from Windows 7 Ultimate. In the past I had Linux installed on either an old laptop or a part-time computer, but now I'm working with Linux on a main computer. The easy part was finding how well Ubuntu installs: the only component that needed a separate driver was the Nvidia video card.

Other than the driver, the rest of the applications installed fine: OpenOffice, VirtualBox, Gwibber, and a few extras that are also available on Windows. I also used Dropbox to keep my other workstations updated with the same files across Windows and Linux; that application alone makes the transition so much easier.

But after getting everything I needed from Linux, there were some problems. One of the issues is Microsoft Office and working with OpenOffice formats. If you open a Microsoft Word 2007 DOCX file in OpenOffice, the formatting is not 100% correct, and vice versa, an OpenOffice ODT file will not open correctly in Microsoft Office. I tested this out at home and didn't notice any issues, but once I started sending documents to other users, I saw many people having difficulties opening the files.

Another issue is formatting with certain browsers, or applications only available for Windows. One of my favorite web applications is watching Netflix, which streams video using Microsoft's Silverlight. As far as I know, there is no official alternative for Silverlight on Linux, but you can use Mono as a possible solution. I have not tried this, so I cannot speak to the reliability of that approach.

The formatting issue is not really a problem with Linux but a problem of sites using either ASP or Internet Explorer specific formatting versus formatting for non-IE browsers. While the percentage of web sites that have issues is going down, there's still that odd website that will always have problems. Also, for the Linux users who need VPN access, in most cases VPN clients are Windows only. I got around this issue by running a Windows XP virtual machine in VirtualBox; it's not really a solution, rather a workaround. It's the same workaround I recommended to Mac users who wanted VPN access on their Mac hardware.

Other than the odd site or special application, my Ubuntu machine has been working excellently, and funnily enough, even the control buttons on the Asus laptop that did not work with Windows 7 work with Ubuntu. So far I'm running all of my favorite applications, including my new favorite, Docky, on the desktop, and the transition has been very seamless.

It's really impressive how far Linux and Ubuntu have come, and with Internet Explorer recently losing the top browser spot, this might allow even further adoption of Linux.


Friday, July 09, 2010

Thinktank Speed Racer camera bag review

A waistpack?

The 80's, neon, roller skates, and old men. Those are the images I picture when I hear the word "waistpack". You don't see many people wearing them, except for joggers or maybe that shopper with a bundle of coupons within arm's reach. So when I ventured on the idea of a bag that can actually hold a decent amount of gear, yet not become too much on my shoulder, I didn't think it would be a waistpack. I broke my collar bone three years ago and still can't carry things on my right shoulder for too long before it gets sore, and that's the shoulder I usually carry bags with.

Searching for the perfect bag

My experience with camera bags is like most people's: I went through a few shoulder bags, the classic block shapes with a usually thin strap holding my gear. After a while I found that carrying the amount of gear I wanted on a trip was not that easy. On most photo trips I did lots of walking, and lugging a bag over my shoulder wasn't comfortable. Soon I tried backpacks, but then I ran into another problem: you have to take them off to get to the camera. I also found that backpacks in general were usually really bulky and deep.

As for the thinner backpacks, they can barely hold a standard DSLR with a battery grip attached, which I normally keep on all of my cameras. Another solution I researched was the new sling style bags, a cross between a standard shoulder bag and a messenger bag. I liked the idea, but many of the slings were too small to really carry enough gear. Another problem: they put too much pressure on one shoulder.

Benefit of the waistpack design

My only experience with waistpacks has been playing paintball. The pod packs designed for carrying paintballs are usually waistpacks; they make it pretty easy to carry over 1,000 paintballs and still run around without any stress on your back. Searching B&H's catalog and other sites, I found a few camera bags based on this design. After much research and debate, I ended up purchasing the Thinktank Speed Racer waistpack, and I'm going to briefly go over the design here and touch on how well it works.

Intro to the Speed Racer

From Thinktank's web site, the Speed Racer is their camera belt pack built around one major compartment, versus their other designs, which are built on a modular system of smaller packs. This is personal preference, but I wanted something less busy and simpler: just one larger pack, rather than buying packs for each item I'm going to bring with me. It's important to note that you can add packs to the Speed Racer if you want.

First off, the construction of the Speed Racer is well done. This feels like a quality bag, slightly heavy but very solid, with some interior foam to hold its shape. The padding and included dividers split the interior into three spaces, though I think most people would prefer splitting it into two. The bag honestly feels like a Lowepro, and it was not surprising to learn that the co-founder, Doug Murdoch, was a lead designer at Lowepro. It does not use materials as thick as a Domke, but honestly I think it's still built just as well. I guess time will really tell.

Overall design

The bag is on the larger side, which some people may find too big compared to the newer style messenger bags. A nice feature is that this bag can handle a pro DSLR with lens attached, where other messenger bags often require you to remove the lens to store the camera. Here's a size reference with the 5D Mark II and bag together.

There is the main compartment as previously described; here's an image with the 5D Mark II next to a 580EX II flash unit.

As you can see, there's enough room for the 5D with grip and RRS L-frame, plus the flash unit, and extras in a little space below. If you look very carefully you can see a 15mm fisheye hiding under it all. While the fit is a little tight, it's not bad, and there's still enough room to add a longer lens or even a few more items. With the 70-200mm f/2.8 in place of the flash unit, the space was tight enough that it took two hands to close the top.

The top of the bag's lid has a nice clear pocket for storing thin items, best for a small map, I'm guessing. It's not enough for anything thick, but maybe a small notebook and pens.

A nice attention to detail is the pocket they made for the zipper end, instead of letting it rest outside where it can scratch your camera body.

The lid also has a "Lowepro" style open top, very similar to the Stealth bags. I've heard some complaints about this not being useful, but honestly I like it. It's important to understand that the opening is not sized for removing the camera, but for smaller items such as a lens or accessories. Here I am holding it open to almost maximum size.

The bag also has two "hidden" pockets, for memory cards and for a rain cover with a tethered strap.

Tethered strap for rain cover.

Rain cover, made of decently thick material. I doubt it would rip if handled roughly.

Moving toward the front of the bag, there is a main pocket which contains two smaller pockets.

The rear pocket is too small for anything but maybe a thin manual, while the front sub-pocket holds the Pixel Pocket Rocket. A nice feature of the Pixel Pocket Rocket is a tether to the main bag, which is removable if needed. It holds 8 compact flash cards in an easy to use folding style; think Velcro wallet.

Easy to access the cards.

Another place to store your business cards.

Now moving to the side of the bag, here's a mesh pocket and another pocket right behind.

The mesh pocket is very flexible, with a cinch strap to make sure your items don't drop out, while the back pocket is more firm. In the photo above I have a media reader in the mesh pocket and a 580EX battery pack in the back pocket.

On to the straps

One area I find lacking in many bags is the straps and buckles. The Speed Racer uses a unique style: a material loop to which a nice metal buckle is linked.

At first I was somewhat skeptical about this design, but looking further, I found that the loop is solidly connected to the bag with a long strip of material reaching down into the side pockets.

The shoulder strap, used as additional support or as an option over the belt, is covered by a thick foam. The foam has some slip and is not grippy, as it's designed to slide on your shoulder when you rotate the bag. Thinktank does include a grippy pad if you intend to use only the shoulder strap, which I would also recommend. It might be hard to see, but the strap is thinner than other straps I've used, so be aware if you intend to wear only the shoulder strap.

The actual belt pack design is pretty awesome.

The strap is pretty thick but tapers off toward the buckle, allowing you to move freely or bend over. The main black belt is made of a thicker plastic covered by nylon, while the lighter gray section is more flexible and lightly padded. You can also add items to the black belt portion as needed from Thinktank's modular belt system.

How does it fit and work?

Wearing the Speed Racer is fairly simple. You adjust the belt to your waist and if needed wear the shoulder strap. When you want to access your gear you can either rotate the bag to the front, or just keep the bag in the front position.

On sizing, I did run into a slight problem.

As you can see, I have the main belt almost at the smallest size possible. Now, I wear size 32" pants, which I don't think is that small, but with this bag I'm near the limit. For women or thin guys, this might be a deal breaker, and honestly I'm not sure what you can do if the bag is too big for you. It's important to note that the bag is not designed to be worn very tight; it's meant to move and adjust as needed.

With the shoulder strap I ran into the same problem. The shoulder strap is almost at its smallest size, yet it just barely fits my frame. I'm 5'9" and 140lbs, so this might give you an idea how well this will fit smaller people.

On the flip side, for larger guys there is plenty of space for adjustment.

The details

While there are some sizing limitations for me, I have to say the details really impressed me. I personally like the zipper design of using cord pulls instead of metal tangs; I think these are less likely to scratch your gear or car.

The Thinktank logo is in many places on this bag.

Overall impressions

Honestly, I really like this bag. For a long time I only used shoulder bags, mostly my Domke J1, but with my shoulder unable to carry a heavy bag on a long walk, I had to look for another solution. With backpacks I liked the support but didn't like the size or trying to get my gear out quickly. The belt pack/waistpack offers a nice in-between solution that I think will make for a great all-day bag, especially for events like shooting a race or a hike.

Now, it's not perfect, as noted by the limited sizing, but if you are a smaller size you might be better off going with Thinktank's modular belt system instead of the Speed Racer. For the record, you can buy the modular belt system and the Speed Racer for about the same price.

I'm going to follow up this post in a few months with a long-term review of the bag and update it with any problems I may discover. So far everything looks great!

Thursday, July 08, 2010

Is Linux harder to learn?

Last week I had a long phone conversation with a good friend who moved out of the area, a few states to the east. I hadn't spoken to him in a few months; things were just busy here and he was busy with his move. After the conversation moved on from catching up, I mentioned how I'm working in a multi-OS environment with Windows and Linux. As I was giving some details on a recent headache, my friend commented, "I tried to learn Linux but it was too frustrating".

I was actually surprised by the comment, not because he was learning Linux but because he struggled with the OS. When we discussed how he had hit a problem similar to one I was also facing, and how many books, while aimed at new users, were very unclear, it made me wonder: is Linux really more difficult to learn than Windows?

What are we used to?

For most users, their first experience with a computer is a Windows desktop. This might be a simple desktop with a mouse-and-keyboard setup, running general applications such as Internet Explorer. From a user's perspective, this is a very simple computer to use: little to no knowledge is needed, and you could get someone up and running within a few minutes. Within a day you could have someone comfortably creating e-mails, writing documents, saving/copying/pasting, and doing simple tasks on the Internet.

Now let's break down how the user views the computer. For most users, it's mostly visual memorization: which icons get you where, that right-clicking shows a menu of available options, and so on. Thinking about this, moving to a non-visual OS would be a big change, but how hard is it really?

Why it's hard to understand the basics

When I first started playing with Linux, I ran the desktop version on my home computer, tried a few simple applications, and that was it. Beyond being a different and free OS, there wasn't much else I could do with it. Why? In part, I was heavily tied to applications that were not available on Linux, such as Microsoft Office and Adobe Photoshop. I was used to those applications on Windows, and my problem was not with the OS but with not knowing how to use a new application.

Once I got past the transition and understood OpenOffice and Gimp, I was able to work pretty much as normal; there was an adjustment period, but soon after that everything was the same. The command line is similar: you are used to the mouse, then switch to typed commands, which takes some time, but like switching from one application to another, it's a matter of getting used to the new functions.

Another reason the command line is hard to grasp is how much it's used in Linux versus Windows. In Windows, almost everything (except maybe a few functions) is done with the mouse in the GUI. I've been a Windows admin for a few years, and the only tasks I know of that really require command-line knowledge are high-level admin changes or commands in the PowerShell environment. The majority of functions and settings are handled from visual menus; some applications use config files, but nowhere near as commonly as in Linux.

Even for a high-level Windows user, it's easy to see how you can get by without ever using the command line for anything more than simple commands. I've worked with IT staff who could not run even the simplest DOS commands.

Are Linux and the CLI really more difficult?

My first adventure in the CLI was not met with success. I think I battled for a few minutes just to copy a file; other times I struggled just to move the cursor in vi. It was a somewhat steep learning curve, but once you understand the basics it's actually very fast.

For example, say you want to view the log files on a server. From Windows, you right-click "My Computer", select "Manage", expand "Event Logs", and click "System". Now let's try this from Linux: at the command line, enter "tail /var/log/messages". That's it. You get the most recent events for the server.

Now, while the Windows method can be shortened, those are the most common steps to view the logs. Remembering them is actually harder than remembering one command, but again, most people are trained to work visually with the mouse rather than recall commands from memory. Once you get down the basics, which are not too hard, things start to make sense.
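The Linux side of that comparison fits in a couple of lines. Here is a minimal sketch run against a throwaway file, since reading /var/log/messages usually requires root; only the file path differs on a real server.

```shell
# Create a small stand-in log so the sketch is self-contained;
# on a real server you would point tail at /var/log/messages instead.
printf 'event %d\n' 1 2 3 4 5 6 7 8 9 10 11 12 > /tmp/demo-messages
tail /tmp/demo-messages                   # last 10 lines by default
RECENT=$(tail -n 3 /tmp/demo-messages)    # -n picks how many lines you want
echo "$RECENT"
```

The `-f` flag is also worth knowing: `tail -f` keeps the file open and prints new events as they arrive.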

Text files vs mouse clicks

One area that also took a while to fully understand was why most Linux applications store their configuration in text files. In Windows, you make changes by clicking through an application's "settings" window. If you've ever worked with DNS servers, you know that a Windows DNS server and a Linux BIND DNS server are very different: on one you can see the settings visually, while on the other everything is stored in conf files.

Which one is easier to work with? If you had asked me a few years ago, it was Windows hands down, but recently it's been Linux. I found that while a conf file in text format is more difficult to view, it is extremely easy to save. Think of it this way: you want to save your configuration, then make a change, but the change causes a problem. How do you revert?

With a text config file it's easily done, but with a typical Windows application it's much harder. How do you export your settings and then import them again? The point is that once you know how to use the other OS correctly, it's not that hard to learn.
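That save-and-revert cycle can be sketched in four commands. The file path and settings here are made up for illustration:

```shell
# Keep a known-good copy before changing a text config file.
echo 'option_a = safe' > /tmp/app.conf
cp /tmp/app.conf /tmp/app.conf.bak       # save the working configuration
echo 'option_a = risky' > /tmp/app.conf  # the change that causes a problem
cp /tmp/app.conf.bak /tmp/app.conf       # revert: one copy command
cat /tmp/app.conf
```

With a GUI application you would have to remember and re-click every setting; here the rollback is a single `cp`.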

Tuesday, June 08, 2010

Microsoft TechNet benefits

A Microsoft TechNet subscription is a very handy tool for Windows-based IT admins, or for people breaking into the IT world. Let me first talk about my entry into IT and how TechNet would have helped me.

When I first got started in IT, I was already involved with multimedia applications, so I knew all about trial software expiring before you really got a chance to play with it. The first book I bought, Windows NT Workstation for Dummies, came without Windows NT, so my first real hands-on experience was on an old Windows NT workstation at work. When I enrolled in the Microsoft classes at college, the books came with Windows NT Server and Workstation demo keys, good for about 60 days.

Back then, it was difficult to find any software besides the OS, and my learning was limited to what was included with the books, which was typically just the OS. Not to mention that downloading software over a 56K modem was not the fastest solution. It was only after landing my first IT job that I started working with applications like Microsoft Exchange and other hard-to-obtain software.

Now, I know people will say "those are available everywhere," but consider a few facts.

1. I didn't know anyone in the IT world.
2. Didn't want to bootleg a CD.
3. Limited to slow download speeds.

The point is, it was tough to get these applications to play with.

Now, with faster Internet speeds and networking with friends, there are many resources, but I feel a TechNet account is still the best choice. Here's why.

1. Licenses do not expire! - The thing I like best about TechNet is that you are given a license key that does not expire after a certain amount of time. Some of the beta software does expire, but the rest is just like buying a retail license.

2. Easy to find and download any Microsoft IT application or server - This is a great way to download Exchange 2010 and get it working at home over the weekend. Nothing teaches you more than getting it working on your own in your personal home lab.

3. Two free incident calls - Each Microsoft incident call is $265, so that's $530 in support calls alone against a $300 subscription price (regular price, often cheaper with coupons).

4. The extras - Microsoft eLearning, online chat, etc. I personally do not like the eLearning and prefer watching simple videos, but it's a nice benefit. I have not tried the online chat yet, but it sounds good, and I have some low-level nagging questions I haven't been able to resolve yet.

The best part of having the applications available is the testing, and since everything is so easy to access, it makes you want to try them all. Even at my previous job I had a hard time accessing some applications for work testing and learning: either the media could not be found or the license manager did not have the keys. Here I am, the manager, able to install and test all of the Microsoft server and business applications.

I highly recommend this for people just getting into IT. Where in the Linux world many applications are free and open source, the Windows world is less open and built around Microsoft applications. If you have a robust workstation at home, with a good amount of physical memory and VMware Workstation, you will be set to simulate almost every possible IT server environment except a SAN.

Where to find it?


Friday, June 04, 2010

Mysterious Windows lockout

"Hello, my account keeps getting locked out"

Anyone who has administered a Windows environment knows about accounts getting locked out. Typically, when a user changes his or her password, certain applications or processes need to be manually updated with the new password. If you don't update them, then each time they run they will authenticate with the old password, which will fail and register in Active Directory as one failed password attempt.

Add a few more services and some mapped drives all trying to re-authenticate at the same time, and you can easily be locked out if your Active Directory password policy is set to five failed passwords before account lockout. This problem, while simple, is often difficult to track down to its real source.

I'm going to list the tools I use and walk through a recent account lockout issue that I resolved. Please note I removed all of the details for security reasons.

What to look for

Before you make any changes, always gather information about the affected user and his or her job role. A data-entry user will be much easier to resolve than an admin logging into many remote systems. Here's the list of questions I usually ask.

1. When was the last time you changed your password?
2. Which account are you using?
3. Where do you login from, only this workstation or multiple computers?
4. Do you work from home on your personal computer?
5. Any special application or service you are using?

From here I can usually get an idea of the user's level. Let's say the user answered the questions as follows.

1. Two weeks ago, problem started today.
2. Jsmith
3. Only login from this laptop.
4. No, but I work from home on this laptop using VPN.
5. Google desktop search but nothing else.

Now let's check the user's desktop for any lockout messages. You can somewhat skip this part, because the lockouts are recorded on the Domain Controller, but I like to check anyway. As I checked the system, there was no indication of any security errors or lockouts. Strange; usually there's at least something to go on.

Ok, let's move to the Domain Controller. Checking the security logs, I filter by the user name, and again nothing is listed for failed logins. I'm starting to wonder where the lockouts are coming from. The domain I am working on logs both failed and successful logins via group policy, so with the volume of logs, the entries could have been overwritten.

Time to install some tools

Before this, I had already installed the Account Lockout and Management Tools from Microsoft. I used the tools from our local Domain Controller and scanned the problem account with the LockoutStatus tool.

LockoutStatus shows the current status of the target account, with details such as where the lockouts are taking place and what time they occur. Here's a screenshot of the tool and why each field is useful.


DC name - Which server the account is authenticating against
Site - Where the servers are located in the AD forest
User state - Whether the account is locked or unlocked
Bad pwd count - How many bad passwords were attempted
Last bad pwd - How long ago the last bad password was attempted
Pwd last set - When the password was last set
Lockout time - The time the account was locked out

What to look for?

So here is what I'm going to look for. First, I run the tool against the problem account, and it automatically finds all of the domain controllers. Here's what I look for in each value.

DC name - The names of the DCs; pair them with the Site names if you are not sure where the servers are physically located.

Site - If you already know the DC name, this is less important, unless you are not familiar with the DCs or do not know where a DC physically sits.

User state - This is handy for showing the "wave" of lockouts being replicated as the account is locked out from its primary DC. I always focus on the DC that locks out first; if I can't find anything there, I move to the next closest DC.

Bad pwd count - This one is a bit tricky. In most cases you will see the number corresponding to your domain policy's bad-password threshold. For example, if your policy allows 5 attempts, you will typically see "5" in this field, which would indicate that there is indeed a rogue application or process using an old password. Sometimes, though, this does not mean there were exactly 5 attempts; it means the DC saw 5 attempts. If there is replication delay in your domain, 1 password attempt can register as 2.

Last bad pwd - This shows when the last bad password was attempted. For example, if you know a user works 8am to 5pm and the last bad password shows 3am, it must be a process running somewhere. If the user takes his or her laptop home and powers it down at night, there might be a process running on a remote server.

Pwd last set - Of all the values, I consider this the second most important. Many times end users change their passwords, and that's when the problems start. So I like to see the gap between when the password was set and when the lockouts started. In most cases it will be very close, usually within a week of the password change. In some cases it can be much longer, because a service or application is not used every day.

Lockout time - Lockout time is when the DC records that the number of failed attempts has reached the domain policy's threshold. It's a bit confusing, and I'm not 100% sure, but if a service uses the account and runs every hour, each run registers one failed attempt; if the domain policy requires 5 failures within one hour, this alone will not lock the account.

So in this example, let's say we found the following values. We caught it just as the user's account was locked out, so only one DC shows it locked.

DC name - DCSF
Site - San Francisco
User state - Locked
Bad pwd count - 5
Last bad pwd - 9:30AM
Pwd last set - 5/20/10 (two weeks ago)
Lockout time - 9:32AM

Ok, from this data I know the following: the account is contacting the DC in San Francisco, it's locking out after 5 attempts (5 is the GPO setting), and it's locking out at 9:30AM. The user starts work at 9:15AM, but he says the account is locked out before he can even log in to his workstation. This might be user error, but let's dig further!
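As a quick sanity check on the "two weeks ago" figure, the gap between the example's Pwd last set date (5/20/10) and the day of the lockout can be computed directly. This assumes GNU date, and takes the lockout day as the post's date, June 4, 2010:

```shell
# Days between the password change and the lockout day (GNU date assumed).
SET=$(date -d '2010-05-20' +%s)
LOCKED=$(date -d '2010-06-04' +%s)
DAYS=$(( (LOCKED - SET) / 86400 ))
echo "$DAYS days between password change and lockout"
```

A gap of about two weeks fits the usual pattern: lockouts start soon after a password change, once some forgotten credential store replays the old password.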

Checking the DC's security logs

Now we have a starting point: DCSF. On DCSF I filter the logs and find this.

Lockout event

Here's the important information (removed real info).

Source: Security
Category: Account Management
Type: Success Audit
Event ID: 644
Computer: DCSF
User Account Locked Out
Target Account Name: jsmith
Target Account ID: BayareaCo\jsmith
Caller Machine Name: sfauth
Caller User Name: DCSF$
Caller Domain: BayareaCo

Now this is important: the value for "Caller Machine Name" points to "sfauth". I double-checked, and the user does not access any system besides his own. I checked the event logs on "sfauth" and found this.

Source: Security
Category: Logon/Logoff
Event ID: 539
Computer: sfauth
Reason: Account locked out
User Name: jsmith
Domain: BayareaCo
Logon Type: 3
Logon Process: CISCO
Workstation Name: sfauth

We found the server; now what application is causing this?

We now focus on the server sfauth, since the last event log points to a process called "CISCO". Let's first see what is running on sfauth. I'm going to use a handy tool from Sysinternals called TCPView, which shows running processes and who they are talking to, as seen below.

TCP View

So we found that the process is running, but let's look at the process details. From TCPView, it looks like a process called "CSAuth.exe" is running on the server and talking to DCSF. I'm going to assume that since it's talking to DCSF, it must be doing some kind of authentication for remote access. I used another Sysinternals tool, Process Explorer, to confirm.


We found the application, now what?

Ok, so we found that it's a Cisco application called Cisco Secure ACS, but what do we do now? I snooped around and found that the application has a few GUI tools and a nice logging report tool. I opened the reporting tool and viewed the failed logins.

Below is one of the failed logins reports for user "jsmith".

Cisco Login

From here we have something to follow: a MAC address and an IP address. While the IP address can change, the MAC address is hard-coded and unique to each network device. I first checked DHCP for this IP address, but it was not within the DHCP range. So I used my favorite tool, NMAP, to do a port scan of the IP address listed in the failed event.

Below is the result, but there's something wrong with this picture, can you find it?


The MAC address listed here shows 00:13:19 while the logs show 00:26:08. That's strange! If this device is a router, then something else is accessing the network beyond what I could see. A quick search on a MAC address lookup tool returned this result.


Apple? We don't have any Apple devices here at work. Or do we? What is made by Apple, often on the network, AND would be accessing the network BEFORE the user logs in to his computer?


Yes, the iPhone. The user has an iPhone that was accessing the WiFi network. To access the WiFi network you need to log in with your AD account, and this password is stored on the iPhone. Here is the login screen for the WiFi access.


Solution and closure

Once I removed the WiFi access from the user's phone, the lockouts stopped. When I tried to recreate the problem, I couldn't get my own iPhone to cause a lockout, but the end user did say he had sync software running between his iPhone and his desktop. I'm glad I found and resolved the problem; this was by far the most difficult account lockout I have battled.

I hope this will be helpful for you!

Wednesday, June 02, 2010

What version of Microsoft applications am I running?

I found this link to be very helpful, especially if you're thrown in to support a server or app that you don't know much about.


Tuesday, June 01, 2010

Mounting CD-ROM in Ubuntu

I'm somewhat forgetful with commands I don't use every day. While trying to remember the exact command, I did a Google search, and every answer I found was missing some step. So I'm writing out the exact steps here for anyone getting lost, and also for myself in the future. :)

This is from Ubuntu so your distribution might be slightly different.

1) sudo mkdir /mnt/cdrom

2) sudo mount /dev/cdrom /mnt/cdrom

3) ls /mnt/cdrom

4) Done!

It's really simple, but my searches turned up lots of very complex answers, and I prefer the simplest possible method.
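The same steps can be collected into one guarded snippet. The mount point here is a temp directory rather than /mnt/cdrom so the sketch runs without sudo up front, and the mount itself only runs when a CD device actually exists:

```shell
MNT=/tmp/cdrom-demo              # stand-in for /mnt/cdrom
mkdir -p "$MNT"                  # -p: no error if it already exists
if [ -b /dev/cdrom ]; then       # only mount when the block device is present
    sudo mount /dev/cdrom "$MNT"
    ls "$MNT"                    # step 3: confirm the disc contents are visible
fi
```

On some distributions the device is /dev/sr0 rather than /dev/cdrom, so adjust the device path to match your system.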


Monday, May 17, 2010

The real problem is not always obvious.

Recently at work I helped with a project to move the IP address of a slave name server. After this move went off without a problem, the main master name server crashed, or appeared to be down. I wrote up a report on this outage and the steps by which I found the real, hidden problem using simple troubleshooting. Please note I changed the IP addresses and domain names for privacy.

Brief history of the environment

The San Francisco site has two name servers, dns1 ( and dns2 (, both running a flavor of RHEL and BIND version 9. The servers manage the child.parent.com domain and are slave servers for the parent.com domain, which is managed in Tokyo. In the days leading up to the outage, we added a Nagios client to both servers and changed the IP address of dns2 on a Wednesday.

First problem appears

On Monday around 6:00am, users alerted us that our VPN site was down. At first this appeared to be a network problem, but users found that the external sites were also down. At this point the network engineer verified that the network was up and all sites were accessible by IP address. The sites were not resolving by name, only by IP address, from external test sites as well. In fact, no external names in the child.parent.com domain could be resolved.

Trying to check the dns1 server, I could not log in using SSH, so I walked to the console in the data center. There I found the server's screen filled with the following messages.

At first I could not access the command prompt, so I restarted the server using the power switch, but I forgot to record the exact information displayed on the screen. The screenshot above was taken later. Once the server was restarted, all of the external sites resolved again and everything was back to normal. All 5 external DNS servers were resolving all of the external sites; we verified this using tools from DNSstuff.

Problem appeared to be solved?

At first we believed the problem was resolved, just a random server crash, but it seemed like too much of a coincidence to crash two days after the IP address change. I had a good feeling that an IP configuration change would not cause the problem, but I wasn't confident the server was 100% stable either.

The first question was why all 5 external DNS servers failed to resolve any name under child.parent.com. This was answered by looking at dns1's BIND named.conf file, shown below. The server dns1 is the master for child.parent.com, and the "expire" setting is configured for 2 days. This means that if dns1 is down for two days or more, all of the slave servers will erase the child.parent.com zone from their records. The value here is 172,800, which is in seconds.

So now we know BIND was configured so the slaves would erase the zone after 2 days without contact from the master; this means the server must have failed at least 2 days before Monday, most likely Friday night.
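The arithmetic behind that two-day window is just the expire value divided by the number of seconds in a day:

```shell
EXPIRE=172800                 # BIND 'expire' value from named.conf, in seconds
DAYS=$(( EXPIRE / 86400 ))    # 86,400 seconds per day
echo "$DAYS days until slaves discard the zone"
```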

While viewing the syslog on dns1, I found numerous SSH login attempts but nothing that specifically reported errors with the system. During the team meeting, management determined the cause must be the last change made, the IP address change of dns2, but I didn't think that change could cause another server to crash.

As the day closed out, I started researching the logs and postponed changing the IP back until the next day, once I had recorded everything from the day's problem.

The problem appears again

That night around 12:30am I was working on the report from home, with a VPN connection to the office and an SSH session to dns1, reading the logs and writing the outage documentation. While working on dns1, my connection suddenly dropped. I knew there was a timeout for SSH connections, but when I tried to reconnect I could not, and the server was not replying to ping either. At this point I knew I had caught the problem in the act, but I also had to drive into work. lol. If I didn't go in, I wouldn't see exactly what was on the console.

At work, I found the following error message. It's the same as before, but since I had forgotten to write it down the first time, I took a photo with my phone. This time I was also able to Ctrl-C past the message and access the console.

Ok, now we have a major clue to get started. I began by searching Google for the phrase "ip_conntrack table full dropping packet," which returned many results. It turns out ip_conntrack is part of iptables, the internal firewall of a Linux-based system. The error message means the server is overloaded trying to track too many IP connections.

From the recommendations I found, sysctl.conf can be adjusted to suit the server, tuned to its physical memory size. The dns1 server has 1GB of physical memory and was using the default limit of 65,528 connections. The error message indicated all 65,528 connection slots were filled, so I increased the value to 98,000, and I also shortened the established-connection timeout from 432,000 seconds to 28,800 so connections would expire faster. After making the changes, I restarted the networking services, then checked the settings using the following command.

/sbin/sysctl -a | grep conntrack
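For context, a commonly cited sizing rule is that the default conntrack table holds roughly RAM/16384 entries, which lines up with dns1's 1 GB and its 65,528-entry default. A quick check of that rule, plus the persistent form of the change (the sysctl key names are illustrative; they vary by kernel version, so confirm them with the grep command above):

```shell
# Rule of thumb: default conntrack entries ≈ physical RAM in bytes / 16384.
RAM_BYTES=$(( 1024 * 1024 * 1024 ))   # dns1 has 1 GB of physical memory
DEFAULT=$(( RAM_BYTES / 16384 ))
echo "$DEFAULT entries"               # close to dns1's 65,528 default
# The post's change would persist via /etc/sysctl.conf lines roughly like:
#   net.ipv4.netfilter.ip_conntrack_max = 98000
#   net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 28800
# then applied without a reboot using: /sbin/sysctl -p
```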

Right after checking that all of the new entries were valid and in effect, I wanted to see how many free slots were available. I used the following command to find out.

wc -l /proc/net/ip_conntrack

I was surprised to find that all 98,000 slots were taken again just seconds after I added them! At this point I knew something had to be generating extremely heavy traffic on the server. I suspected a denial-of-service attack but had no proof, so I began to look at what the server was doing locally. First, I ran the simple command to see the top processes and get a general overview of the server.


Ahh, now we see multiple processes called "ssh-scan" running on the server. Next I wanted to know how many, so I ran the following command.

ps aux | grep ssh-scan

The result was a list of over 200 ssh-scan processes running on the server. I knew it was highly unlikely that any member of the IT staff would be scanning servers, so I was almost certain this was malicious. It's important to note that ssh-scan is not like NMAP, which has legitimate uses; ssh-scan is just a brute-force scanner.
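One small wrinkle with the `ps aux | grep` approach: the grep process itself shows up in the output and inflates the count by one. A common trick is to bracket the first character of the pattern. Here is a self-contained sketch against canned ps-style output:

```shell
# Simulated 'ps aux' output: two real ssh-scan processes plus the grep itself.
PS='root  4843  ./ssh-scan
root  4846  ./ssh-scan
admin 5001  grep [s]sh-scan'
# '[s]sh-scan' still matches the literal string "ssh-scan", but not the grep
# command line, because that line contains "[s]sh-scan" instead.
COUNT=$(printf '%s\n' "$PS" | grep -c '[s]sh-scan')
echo "$COUNT"   # only the two real processes are counted
```

On a live system, `pgrep -c ssh-scan` gets the same count with no grep gymnastics.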

Since I knew the purpose of the tool and what it does, I wanted to see dns1's network activity. Running the following command showed me the results.

netstat -n

It looked like the server was trying to connect to random servers on the Internet via SSH. Next I wanted to see only the activity on port 22, SSH, so I used the following command.

/usr/sbin/lsof -w -n -i tcp:22

ssh-scan 4843 root 7u IPv4 424670 TCP> (ESTABLISHED)
ssh-scan 4846 root 7u IPv4 424767 TCP> (SYN_SENT)

At this point, I was almost 100% certain there was either a rootkit or some malicious process installed on dns1. I wanted to figure out where it was located, but my attempts failed. I went back to searching Google for a solution and stumbled across a posting explaining how to run an SSH scanner. The poster showed the basic steps to deploy and set up a remote SSH scanner on a compromised server, basically what I had in front of me. Ironically, his detailed steps gave me the biggest clues for finding the application.

Here's the link to the forum posting.

Then the important clue from the posting.

then type the following
cd /usr/man/man3/
and then :
mkdir ". hiden"
and then :
cd "..."
This is an hidden dir so the Sysop wont notice

Using the clue from the posting, I guessed the SSH scanning application would be installed somewhere under /usr. I checked around but didn't find anything until I stumbled across /usr/tmp. It was just a hunch, but I was thinking: where would you hide an important file? In the temp directory. I also thought of the movie Hackers and how the file was hidden in a garbage directory to stay out of sight.

I then ran the command to view hidden files and directories.

ls -a

Ahh, now we found something: a directory called "ssh-scan"! So what is this application really doing? Here are its contents.

Bios.txt - Very long listing of IP addresses
Nfu.txt - Very long listing of random IP addresses
Pass_file - Dictionary attack file
Spd - Script
Vuln.txt - Appears to hold account names, passwords and IP addresses of cracked systems.
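The hunt for that hidden directory can be scripted instead of done by eyeballing `ls -a` one directory at a time. A self-contained sketch against a temp tree (on the server the search would start at /usr, and the dot-space directory name mimics the trick from the forum posting):

```shell
# Build a fake tree with a hidden directory like the one attackers use.
mkdir -p '/tmp/hunt/usr/tmp/. /ssh-scan'
# Find any dot-named directories anywhere under the tree.
find /tmp/hunt -type d -name '.*'
HIDDEN=$(find /tmp/hunt -type d -name '.*' | wc -l)
echo "$HIDDEN hidden directories found"
```

Names like ". " (dot followed by a space) are easy to miss in a casual listing but stand out immediately to `find`.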

The files and directory were deleted, but now I wanted to find out how this security breach really happened.

Resolving the security issue

I knew the root account does not have SSH access, so I had to find where else the attacker gained access to dns1. In /var/log/messages I found many "failed password" entries from various IP addresses, trying every possible account name from "root" to random names like "paulsmith". I exported the logs and searched for successful logins; there I found successful logins from two accounts, "siteadmin" and another, "service". Note: "service" is not the actual name.

Looking at the two accounts: "siteadmin" is the account we use to log in locally and then su to root. It was accessed only from internal IP addresses, so we knew it was not used in the breach. But the account "service" had logged in from various IP addresses we did not own or know about. "Service" was created for the local service agent that monitors the hardware on the server. The questions were how the attackers got into the service account, and why it could log in remotely at all, since it should not have had remote SSH access.
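The log triage described above can be sketched with grep against a throwaway file; the log lines here are invented stand-ins for the real /var/log/messages entries:

```shell
# Fake auth log with the two patterns we care about.
cat > /tmp/auth-demo.log <<'EOF'
sshd[101]: Failed password for root from
sshd[102]: Failed password for paulsmith from
sshd[103]: Accepted password for siteadmin from
sshd[104]: Accepted password for service from
EOF
grep 'Failed password' /tmp/auth-demo.log             # the brute-force noise
ACCEPTED=$(grep -c 'Accepted password' /tmp/auth-demo.log)  # successful logins
echo "$ACCEPTED successful logins"
```

The successful logins are the short list worth reading line by line; comparing their source IP addresses against the ranges you own is what separates "siteadmin" from "service" here.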

I removed the problem account's access and revoked its login rights by changing the passwd file to give the service account "nologin". I changed the root and siteadmin passwords and also disabled SSH entirely, since no one really needed it; the server is just a few feet inside the data center. Digging deeper, we found that the service account had been added to the "wheel" group, giving it SSH remote login access, and that its password did not follow our normal security standards.
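The passwd change described here amounts to swapping the account's shell. A sketch against a copy of the file (GNU sed assumed; on the live system `usermod -s /sbin/nologin service` would do the same thing, and the entry shown is made up for illustration):

```shell
# Work on a copy so the sketch is harmless; the real edit targets /etc/passwd.
echo 'service:x:1005:1005::/home/service:/bin/bash' > /tmp/passwd-demo
sed -i 's|:/bin/bash$|:/sbin/nologin|' /tmp/passwd-demo   # swap the login shell
cat /tmp/passwd-demo
```

With the shell set to /sbin/nologin, any interactive login as that account is refused even if the password is guessed.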

To compound the issue, the external firewall was allowing SSH to the servers in the DMZ, which the rest of us did not know.

Why and how could this happen?

It's very easy to assume everything was done normally and there shouldn't be any problems. The biggest problem is that unless you installed the server or deployed the firewall rules yourself, you never really know for sure. I've experienced this many times at different job sites, and it's always eye-opening.

A few of our problems came from different departments working on the same server. When one group accesses the system, they might not know details the other team knows about, for example that the server allows SSH access from the Internet. Another issue was how few of the security details anyone knew in full.

Our department created the service account with a generic password at the request of another team, but did not add it to any SSH access group. Another department might have been troubleshooting the server and figured they would add the service account to SSH access to make troubleshooting easier. Finally, since SSH was open to the outside world, malicious users could scan away trying to get in. The IPS devices were not monitoring for this type of attack, so it was not picked up. Then, given the sheer number of tries, the attacker eventually found a password that worked, gained access to the server, deployed the SSH scanning tool, and started maxing out our server's resources.

It's a bit ironic that because the name server was so old, it alerted us to the problem: even a little scanning overloaded the server to the point that it dropped off the network.

What to do in the future?

Monitor your logs! I saw the SSH attempts earlier, but didn't think anyone would add an account with a very easy-to-crack password to the SSH access group. Now that I know the dangers, I keep SSH disabled and only turn the service on when needed. Also, check the passwords on your systems and verify who has what access; don't assume everything is as it should be.

While it was a tough problem to solve, especially since I don't have much Linux administration experience, it was very interesting, and with my experience from Windows administration it was a matter of knowing the correct commands. I knew that something was running, but what? I searched for the process with the high resource usage, and once it was found, I needed to know what it was doing. After that, how did they gain access? From there I searched the logs and found that the IP addresses for one account did not match the others, which was a clue to where the logins were coming from.
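The first step of that hunt, finding the process eating the resources, can be done with a single ps invocation (procps-style options, as found on most Linux systems):

```shell
# Show the five heaviest CPU consumers, highest first (header included)
ps aux --sort=-%cpu | head -n 6
```

From there, a tool like lsof -p with the suspect PID shows which files and sockets that process has open.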

I intend to really learn more about Linux administration so that, if this happens again, I can resolve the problem much quicker. So far, it's been a welcome intro to security problems on Linux.

Wednesday, April 07, 2010

Tool review: NMAP

Nmap has been around for years. It's a very powerful scanning tool used to check servers or network devices for open ports and other information, which makes it useful for auditing or security testing. I was first introduced to Nmap at work, auditing Windows servers.

I found the ability to check a server for open ports handy but wasn't sure where it would be helpful besides security scans. It wasn't until much later, troubleshooting applications, that I found the port view extremely helpful for determining whether a server is actually listening on a port or not. It was also helpful for determining whether the server's local firewall was causing issues and blocking an application.

There are other tools that offer (somewhat) similar scanning; before Nmap I used Sysinternals' TCPView for finding out about a system's ports. The problem is that this view is not what other systems see; it's from the inside. Nmap offers a much better real-world view of a system.

Another point where Nmap is very helpful is testing connectivity between servers or applications. I recently installed Webmin on a server (great tool), but the default port 10000 was not working. Using Nmap I was able to tell that 10000 was being blocked somewhere on the network, because it was not blocked by the server's own firewall. This is handy when you don't have access to the hardware firewalls but want at least some evidence that you're on the right track.
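Nmap is the right tool for this, but when it isn't installed on the machine you're testing from, bash's built-in /dev/tcp device makes a crude stand-in: it just attempts a TCP connection. This is a sketch only, and unlike Nmap it can't tell "closed" apart from "filtered"; the host and port below are placeholders.

```shell
#!/bin/bash
# Crude single-port connectivity probe using bash's /dev/tcp.
check_port() {
  local host=$1 port=$2
  # The subshell keeps a failed redirection from aborting the script;
  # the file descriptor closes automatically when the subshell exits.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open"
  else
    echo "closed or filtered"
  fi
}

check_port localhost 10000   # e.g. the Webmin port from the example above
```

If the port shows open from the server itself but not from a remote machine, something in between is blocking it, which is the same conclusion the Nmap test gave.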

The latest version of Nmap now includes a GUI front end and is very easy to run. Honestly, I liked the command-line version from before, but the new version really makes it easy for anyone to use.

Check it out; it's available for Windows and other operating systems.

Thinking too hard for the solution

At work we've been rolling out Nagios agents on the servers; it's simple enough on both Linux and Windows. The problem was a few servers we had running in the DMZ, an isolated network that requires greater security. A problem came up where certain ports, the Nagios ports, were not working; the only thing working was 22, the standard SSH port.

Trying to figure out this problem, I looked through the system, thinking it might be an internal firewall rule, and stopped iptables with no change in result. It wasn't until I used Nmap to scan the servers that I realized all of the ports were closed except 22.

At this point I asked the network engineer, and he confirmed that yes, extra ports were being blocked on the DMZ network.

This brings up a very important point: how to troubleshoot and gather information.

First, when dealing with any new problem, always understand your environment. I can't stress this enough. I once helped a friend over the phone with a network problem at a large grade school where she was deploying a network device. The problem was she couldn't get terminal access to the device she had installed on the network. So we ran through the steps: the IP address she was given to use, ping commands, etc.

The problem turned out to be that the school administrator gave her a duplicate IP address; while she could ping the address (which was not her device), she could not terminal into the machine. She assumed that since it was the IP address given to her, it must be working, but in the end it was not. It's important to always double-check and know the environment.

This applies not only to trusting the information you are given but also to knowing what to check when something is wrong. Another example is a problem I experienced while changing a server's IP address. I followed the company standards for moving a Windows server's IP address from one subnet to another, and I worked closely with our network engineer, who had access to the switch I was connecting to.

After making the IP address change, I could not get access to the network. I saw link lights on the server, but it could not reach the network or reply to a ping. I checked all of the cables, even plugged into another port on the patch panel: nothing. I asked the network engineer three times if the network ports were changed to the right subnet; each time he checked and confirmed. Finally I asked my manager, who still had switch access, if there was some problem I couldn't find. He logged into the switch and found it was set to the wrong subnet.

The problem in this case is that I assumed the network engineer, who checked three times, had actually checked his work. It was much harder to verify the work since I did not have access, but because I had checked every possible connection and problem on the server, I could confidently say it was not an OS issue. This matters when a problem appears and you have to decide whose side should fix it; often it's a battle back and forth over where the problem lies.

In larger companies resolving issues becomes difficult; sometimes the department you work with on a project may be halfway across the world. In my work, many of my co-workers are not local and I have limited access to remote servers. It's difficult, but I still use the same skills to tell whether a problem is on our side or theirs: I inspect the environment and then apply the same knowledge to figure out where the problem is.

In any job you need to know how to fix something, whether it's a broken computer or an issue on a project. They are very different but require similar skills: knowledge of the environment to make a decision.

Monday, April 05, 2010

Installing Squid proxy on Ubuntu 9.10 server

Recently at work we have been battling the issue of controlling Internet access for our users. The Internet is basically open to everything, and of course in a business environment there's some abuse of that access. Taking a snapshot of the usage with Wireshark connected to the core switch (with port mirroring enabled), we found that Youtube.com accounted for the majority of Internet traffic. We also found that less than 4% of network traffic was internally routed; this could mean we had many users accessing the other offices, or far too much Internet browsing.

Since Youtube was our biggest website and it was not business related, we got the OK to block the site. But we ran into our first problem: if you check the domain name Youtube from any nslookup utility, you will find that Youtube spans multiple IP addresses, some of which overlap with Google.com and Gmail.com. We got Youtube blocked, but it was somewhat difficult, and since it was done on the firewall, it's not the recommended method.

So, looking around the Open Source world, there might be another solution.

Squid is a well-known proxy for Linux that is easy to configure but appears a bit difficult at first. I was really overwhelmed looking at the config file, but here I'll show you how easy it is to get running with very little work.

I'm going to give the steps using Ubuntu Server 9.10, which I think is pretty easy to get running. We'll use the package install, which saves some time and pulls in any other requirements at the same time.

The first part is having an Ubuntu server running; this is the easy part, it just takes some time. Remember to have OpenSSH running so we can get a remote console into the server. Also, since this is a server, we will configure it with a static IP address.

1) First, make sure your server is updated.

Run the following

sudo apt-get update (this is fast)

sudo apt-get upgrade (takes about 5 minutes depending on your Internet connection)

2) Then we will set up the IP address with a static entry. First, let's find out which network interface we need to configure.

Run ifconfig and take note of the network card that reports an IP address; usually it's "eth0".

Edit the following file.


Then add the following information using an editor like vi. Remember to save your work!

iface eth0 inet static
address (enter your IP address here)
netmask (enter your subnet mask here)
gateway (enter your gateway here)
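For reference, on Ubuntu 9.10 the static configuration lives in /etc/network/interfaces, and a complete entry looks something like this (the addresses below are placeholders for your own network, and the auto line brings the interface up at boot):

```
# /etc/network/interfaces (example values only)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```

If you change this on a live system, restarting networking (sudo /etc/init.d/networking restart) applies the new address.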

3) Now install Squid from the package, note this will install the requirements needed for Squid as well.

sudo apt-get install squid (takes about 5 minutes)

4) Now Squid is installed; let's go over its important files.


This file holds all of Squid's important settings. It's very long, but we'll only need to edit a few parts.

First, make a backup of the file

cp squid.conf squid.conf_backup

Then edit the file

vi squid.conf

5) We're looking for a few areas in the squid.conf file, so we'll use the search function of VI.

Open the file in VI

vi squid.conf

Then search for the first item, "TAG: http_port", using vi's search:

/TAG: http_port (then press Enter)

For me it's at line 1022. This value sets the port that clients will enter in their browser's proxy settings. Proxies are commonly set to port 8080, so we'll use that here. Enter the following after the "TAG: http_port" comment block:

http_port 8080

6) Now search for "visible_hostname"


This value just gives an alternative name to the server. I kept it simple and used "proxy". For me the line was 3399.

7) Allowing access, this is a tricky part.

OK, now we need to know the subnet you're allowing to use the proxy. For this example, we'll use (basically addresses from 1 to 255).

Search the squid.conf file for "TAG: acl"; it's at about line 425. Go a few pages down until you see some uncommented entries (lines without a # sign) and enter your details there.

For this example we'll enter the following.

acl allowhome src

This means allow a new network called "allowhome" from the source address of ""

Remember the name "allowhome" we'll need this later.

Also we'll need to define the access for "allowhome".

Search for "http_access", and once you scroll down you should see a few "allow" and "deny" entries. Enter the following at the top of that section.

http_access allow allowhome
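Putting the two pieces together, and assuming a home subnet of 192.168.1.0/24 (substitute your own network), the entries in squid.conf end up looking like this:

```
# Define the network and allow it through the proxy
# (192.168.1.0/24 is an example subnet)
acl allowhome src 192.168.1.0/24
http_access allow allowhome
```

Order matters in the http_access list: Squid uses the first rule that matches, so the allow line should come before any broad deny rule.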

Now save your file and exit.

Just to be sure it took effect, restart Squid.

/etc/init.d/squid restart

Now, from your home computer, point your browser at your server's IP address on port 8080. You should be able to browse as before.

Pretty simple!

Next time I'll post about working with blacklists, blocking certain ports or websites, and Webmin as a GUI tool for administration.

The pros and cons of working with Linux

Recently at work my duties with the Linux servers have been increasing. I'm really excited to work with Linux and gain experience, but it's also somewhat frustrating at the same time. Coming from a Windows background, I noted some things that are different about Linux, both good and bad.

Applications need to be compiled in Linux

Not all applications, but a good number, need to be compiled and then installed by running some scripts. It's not difficult, but when things go wrong it takes some time to figure out what's really causing the issue. At first this was the biggest problem I had switching over to Linux. Why would a software vendor ship applications that are not even ready to install?

After working with some problematic Windows applications, I started to see the light. Many times in the Windows world we were required to install applications that were questionable, not in terms of malware but in how they were developed: hard-coded names, etc. In one application we had many issues because the timeout of a child process was extremely short, so short that any lag in the network caused the entire application to crash. Perhaps in a Linux version we could have seen this in a config file and made the change?

Open Source in Linux applications

This is more of a personal preference than anything else. I've worked mostly on commercial applications, for example the Microsoft Windows family, Microsoft Exchange, and other popular applications used in business environments.

We'll see how well Open Source works for me in the future. So far there are some projects where it's really helping, but it's also somewhat hard to get the right support.

Friday, March 05, 2010

Testing SMTP

At work there is an issue with the SMTP server where messages are getting hung up or delayed. It's always handy to have these commands around to manually test SMTP and make sure everything is up and running. The problem I'm facing is that I'm able to send the messages, but they are getting lost somewhere.

Testing SMTP Server from Microsoft


Here are the commands, all run from a Windows command prompt, but they can be used from UNIX or Linux as well.

1) open command prompt

2) type "telnet"

3) type "set LocalEcho"

4) type "open servername 25", substituting your mail server's name

5) You should see a greeting like "220 servername.domainname.com ready"

6) Type "helo me"

7) You should see the following "250 OK"

8) Type "mail from:test@testdomain.com" where you can enter any address here

9) You should see the following "250 OK mail from test@testdomain.com"

10) Type "rcpt to:email@domain.com" this is who you want to send the message to

11) You should see the following "250 OK recipient email@domain.com"

12) Type "Data"

13) You should see the following "354 Send data"

14) Type "Subject:Your subject" press enter twice

15) Type "Testing" and press enter

16) Press enter, then type a period (.), then press enter

17) You should see a response like "250 Queued mail for delivery"

18) Type "quit" to close the session
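Put together, the whole exchange looks roughly like this (C: is what you type, S: is the server's reply; the addresses and hostnames are placeholders, and the exact response wording varies by mail server):

```
S: 220 servername.domainname.com ready
C: helo me
S: 250 OK
C: mail from:test@testdomain.com
S: 250 OK mail from test@testdomain.com
C: rcpt to:email@domain.com
S: 250 OK recipient email@domain.com
C: data
S: 354 Send data
C: Subject: Test message
C:
C: Testing
C: .
S: 250 Queued mail for delivery
C: quit
```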

The message should then appear in the recipient's mailbox; in my case it works the first few times, then the messages stop showing up.

Monday, February 15, 2010

Train Signal CBT

Recently I purchased the Train Signal DVD set for VMware vSphere. The set arrived very quickly, and inside was a nice DVD case with three DVDs. It's important to note that these DVDs are not protected by any digital rights management, passwords, or accounts. I purchased CBT videos from another major company and was very disappointed that I could not save them to my iPhone or any device that was not on the Internet.

The discs are not standard video DVDs but data discs containing a Flash menu with links to the AVI videos. There are also PDF documents containing notes from the videos. In addition, the third DVD carries versions formatted for portable media players, in AVI, and even MP3 files for your car. It's a really nice feature, and it allows you to take full advantage of the lessons anywhere.

Overall the videos are well done. I'm on video 5 of 36, and each runs about 15 to 30 minutes. The instructor is very easy to understand, without the little jokes or side stories I generally dislike from some other instructors; he's much more to the point, and easy to listen to for an hour without a problem.

Once I get farther in the series I'll write more about my experience and a longer review.
Back to VMware

It's been a while since I updated this page. Work and social life have been taking over, and I also started school this year. I've been tasked with a few projects at my current job where I'm focusing on VMware issues with ESX Server. Currently there are no major problems, but I wanted to have a solid plan for the future. The biggest hurdle is the smaller servers in the data center.

Many companies have small, older servers in the data center for one application, typically something that needs a database or hosts an internal web site. Here at my current job there are a few servers like this, all very low powered and at least a few years old. These are prime candidates for physical-to-virtual migration. This would also help us get away from old, unsupported hardware; who knows where to find a power supply for a custom-built server chassis from a company that's no longer in business.

The problem with going virtual is how to keep the guest servers up when the ESX server itself has a problem. I'm currently reading and studying vSphere, hoping to have a project plan written up soon. Ideally I would like to have a system planned out with redundant failover ESX servers, so that one system going down does not take down all of the virtual machines.

It's going to take some planning and understanding of the environment. I'm really aiming for the simple method, making it the easiest to understand and manage. Considering the environment, this would save space and energy, not to mention support issues, since there's simply less hardware to fail.

Having been in a role supporting very old hardware, including non-hot-swap drives in a RAID 1 configuration, it was extremely painful to take down the server just to swap a drive. And that's IF you are alerted to the drive failure before both drives go down.