Making Nice Looking Diagrams in Linux

My fellow systems and network administrators, architects, and other technical solutions providers who run Linux have — like me — been looking for a solid Microsoft Visio replacement for a long time. Visio and its kin are what techies use for environment and process diagramming.

We have this in Dia, and it does indeed get the job done fairly well. The problem, among other minor quirks, is that the output isn’t beautiful. A big reason is that the icons — often called stencils or shapes — representing various environment objects (servers, switches, firewalls, and so on) are pretty ugly.

This is why I’m so excited about the latest version of Calligra Flow (v2.8.1). It is quite refined, easy to use, and — most of all — now supports SVG format stencils. As soon as I found out about this, I started searching around for free SVG stencils to import.

Very big thanks to Open Security Architecture (OSA) for providing their icon library free under the Creative Commons.

Step 1. Get Calligra Flow. You must have version 2.8.1 of Flow to import SVG stencils. If you have OpenSuSE, version 2.8.1 is in the KDE:Extras repository that goes with KDE:Current or newer.

If you have Kubuntu 14.04, simply use apt-get. If you have an older version of Kubuntu, now is a good time to upgrade but you can probably also use a PPA.

sudo apt-get update
sudo apt-get install calligraflow

Step 2. Get the stencils. Download the stencils from the SourceForge project I set up.

Step 3. Install the stencils. I set up the SourceForge project to make this whole thing as simple as possible. Just unzip the files and drop the “NiceNetwork” directory into ~/.kde/share/apps/flow/stencils/ (the path is the same on Kubuntu and Suse). If there is any doubt about where to put the NiceNetwork directory, run Calligra Flow and click that green “+” at the top of the stencil box — it will take you there.
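In shell terms, step 3 boils down to something like the following (the archive name NiceNetwork.zip is my assumption — use whatever the downloaded file is actually called):

```shell
# Unpack the stencil archive and drop it into Flow's stencil directory
unzip NiceNetwork.zip
mkdir -p ~/.kde/share/apps/flow/stencils
cp -r NiceNetwork ~/.kde/share/apps/flow/stencils/
```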

After you paste the NiceNetwork folder where it goes, restart Flow and you should see “Nice Computer/Network” in the list of available stencils.

Voila! Now you have nice looking stencils to make professional looking diagrams in Linux!

Generate a Self-Signed SSL Certificate on Linux in Four Lines

This is a really simple task but most tutorials for it drone on forever. Here’s all you really need to know.

You can copy and paste the lines below into your command console. Replace “test” with your domain name — e.g., use example.com.key in place of test.key.

1. Generate the key:

openssl genrsa -des3 -out test.key 2048

2. Remove the password for the key:

openssl rsa -in test.key -out test.key

3. Generate the certificate signing request (CSR):

openssl req -new -key test.key -out test.csr

4. Generate the certificate:

openssl x509 -req -days 2000 -in test.csr -signkey test.key -out test.crt

Done! You are now ready to set up the certificate on your web server or load balancer.
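If you’d rather script the whole thing with no prompts at all, here’s a condensed sketch of the same steps — no passphrase is set in the first place (so step 2 isn’t needed) and -subj pre-fills the CSR questions. The domain test.example.com is just a placeholder:

```shell
# Generate key, CSR, and self-signed certificate non-interactively
openssl genrsa -out test.key 2048
openssl req -new -key test.key -out test.csr -subj "/CN=test.example.com"
openssl x509 -req -days 2000 -in test.csr -signkey test.key -out test.crt

# Inspect what you just made
openssl x509 -in test.crt -noout -subject -dates
```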

A word on self-signed SSL certificates: Such certificates do not authenticate the owner of the site they are on, which is part of the point of SSL. For this reason, web browsers will warn site visitors the SSL certificate is not signed and not necessarily authentic/secure. So self-signed SSL certificates are mainly used for testing or for internal sites not available to the public.

Installing keepalived on CentOS With Unicast

NOTE: This article is a work in progress. It’s up early in case it might be useful to people who need it in the meantime.

We often have a need for fail-over on our mission critical environments. This is often accomplished with some sort of floating IP address or clustering. keepalived is a fantastic tool for facilitating this. Fail-over for the totally amazing HAProxy is a common use case for keepalived.

Like most similar solutions, however, keepalived is normally used in multicast environments and there isn’t much information on how to deploy it in a network that prohibits multicast (as many cloud providers do). Hence the impetus for this article!

You can set your two machines up individually or use a neato tool like tmux to set them both up at the same time. The configuration files are slightly different between the two servers and I’ll explain that as one of the final steps.

Add this line to /etc/sysctl.conf:

Then load the edited file by running this at the command line:
sysctl -p
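The draft above doesn’t show the line itself; the setting typically required in a keepalived floating-IP setup — so a daemon can bind to the VIP even when this node doesn’t currently hold it — is the following (my assumption, not necessarily the author’s original line):

```shell
# /etc/sysctl.conf — allow binding to an address the host doesn't (yet) own
net.ipv4.ip_nonlocal_bind = 1
```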

The keepalived package available from the CentOS 6.5 yum repository is 1.2.7. Unicast wasn’t a part of the main keepalived code base until around v1.2.8. So we need to compile from source to get the latest version (1.2.12).

First, download the tarball:

Unpack with ‘tar -xvzf’ and compile with the standard parameters:
./configure && make && make install

*NOTE: I had to move some files around after compile to get things to work properly. I’ll set up a test environment to remind myself which files I had to move and where I put them, then update this article.*

The configuration file (/etc/keepalived/keepalived.conf) should look something like this example below. Changing only the IP addresses and interface names as needed should be all you need to get running. The vrrp_unicast_bind and vrrp_unicast_peer parameters are the ones that make the unicast magic happen. virtual_ipaddress is your floating IP address that will be re-assigned in case the current master becomes unavailable.

vrrp_script chk_haproxy {
    script "killall -0 haproxy"     # cheaper than pidof
    interval 2                      # check every 2 seconds
    weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth1
    state MASTER
    virtual_router_id 51
    priority 101                    # 101 on master, 100 on backup
    vrrp_unicast_bind 10.0.0.1      # internal IP of this machine (example address)
    vrrp_unicast_peer 10.0.0.2      # internal IP of peer (example address)
    virtual_ipaddress {
        10.0.0.100                  # the floating IP (example address)
    }
    track_script {
        chk_haproxy
    }
}
Once you’ve edited the configuration file, it’s time to fire up keepalived:
service keepalived start
(Equivalently: /etc/init.d/keepalived start)

Use the following at the command line to see which IPs are active on a given interface. Substitute the interface name you want to know about for ‘eth1’.
ip addr sh eth1
Hopefully, you see the primary address of the node you are on as well as the floating IP!

Before you call it done, go ahead and set keepalived to automatically start at boot. Do something like:

ln -s /etc/init.d/keepalived /etc/rc3.d/S91keepalived
ln -s /etc/init.d/keepalived /etc/rc3.d/K91keepalived

You can substitute unused numbers from /etc/rc3.d/ for the S91 and K91. Of course, just ls /etc/rc3.d to see what’s in there.

Now you have fully redundant servers with keepalived. No multicast needed!


Install Latest Varnish Cache From Source

Varnish Cache is rad… but the packages available in the repositories of most platforms are for older versions. Never fear, compiling it from source is a piece of cake. Read on and give it a shot yourself.

Update: @ruben_varnish reminded me via Twitter that the Varnish people keep a repository for RHEL and CentOS. Swap that “6” for a “5” if you’re running RHEL/CentOS 5.x.

Grab the code (substitute the current file name from the Varnish download page):

Install the dependencies if you don’t have them already:
yum install -y pcre-devel gcc

Unpack, compile, and install the code:
tar -xvzf varnish.tar.gz
cd varnish-3.x
./configure && make && make install

Add a Varnish user:
useradd varnish

Download this configuration file, edit it to your needs, and move it to /etc/sysconfig/. The main thing to adjust here is the memory size of the cache, on the line beginning with ‘-s’; the Varnish developers offer a word of guidance on sizing.
wget ../varnish
vim varnish
mv varnish /etc/sysconfig/

You’ll need to edit the VCL file at /usr/local/etc/varnish/default.vcl. The Varnish documentation explains how it’s configured.

You can use this init script to easily start, stop, and restart the service. Just copy it to /etc/init.d/ on your system.
wget ../varnish
mv varnish /etc/init.d/

Fire it up and test it out:
/etc/init.d/varnish start

If you have trouble, you can try launching Varnish from the interactive console to see if the problem lies with your init script (or items it points to) or something else:
varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,256M -T 127.0.0.1:6082 -a 0.0.0.0:80
This page explains what all those parameters do.

Varnish has quite a different way of logging, designed for speed. Here is a reference for that.

The Varnish documentation is a good reference as well.

Set Up LVM on Software RAID in Ubuntu Installer

Software RAID can be pretty confusing, especially when you are accustomed to dealing with hardware RAID like I am. Adding to the confusion is LVM on top of software RAID. A big key to understanding how this works and how to configure it is that with software RAID, partitions comprise the array, whereas with hardware RAID, it is the physical disks that comprise the array. So the process is: take two physical disks and partition them for software RAID; create a RAID array from these “low-level” partitions; create an LVM volume group on the array; create LVM volumes within that group; and finally create file-systems on those volumes. Once all those steps are complete, the operating system can be installed.

Got all that?! It takes a moment to get your head around how this RAID method works but once you do, you’re on your way and things are much less confusing. If you haven’t got all that, don’t worry — I’ll take you through each step right here with screen-shots.

These instructions are for RAID 1 using the server image of the Ubuntu installation media though they can be adapted for other RAID levels (such as 5) and/or the desktop installation media. I’ve tested these instructions on versions 12.04 and 12.10 of Ubuntu.

Get started:

    • Install two drives in your system. Preferably, both drives are the same size. I am using a pair of two terabyte SATA disks.
    • Complete the initial steps of the install process as you normally would.
    • When you arrive at the partition screen, select manual partitioning.


    • Select the first disk (actually the line under it representing the partition) and create one partition that takes up all the space on the entire drive. If these drives are brand new and have no partitions, you’ll get a prompt asking, “Create empty partition table on this device?” Say yes.
    • Designate your new partition for RAID by selecting “Physical volume for RAID” at the “How to use this partition:” prompt. This process will create a new RAID device.


  • Repeat the previous step for the other physical disk.

Here’s the overview of my partition layout and settings:

Next, enter the LVM configuration:

    • At the prompt asking, “Write the changes to disks and configure LVM?” Select yes.
    • Create an LVM volume group on the new RAID device (/dev/devname). Give it any name you wish. A prompt will appear asking which devices should belong to the new volume group. Select both devices by pressing [space] as shown. Again, you’ll be asked, “Write the changes to disks and configure LVM?” Select yes.


    • Create an LVM volume in your new volume group. I typically create a swap volume first and name it “swap.” Here I am setting the swap volume at 16GB.


  • Create an LVM “root” volume. Here, I normally create one volume that consumes the remaining space on the drive. If you’ve already created a swap partition at this point — or don’t want one — you can simply select “continue.”
  • Take a quick look at the LVM summary screen to verify you have the right number of everything. In my case, as shown and described in these instructions, there should be two used physical volumes, one volume group, and two logical volumes. Make sure your screen shows the desired result, then select “finish.”

We’re almost there! Next, partition the swap volume:

    • Go to the LVM volume in the normal partition screen as shown. Set the filesystem type (under “Use as”) as swap.


  • Partition the root volume. Set filesystem type as ext4 (or whatever you prefer) and the mount point as ‘/’.
  • Now you can write these changes to the disks and continue the OS installation.
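For reference, the same layout built by hand from a live/rescue shell (rather than through the installer) looks roughly like this. This is only a sketch — the device names /dev/sda and /dev/sdb, the array name /dev/md0, and the volume group name vg0 are all assumptions, and these commands destroy any data on the disks:

```shell
# One full-size partition of type "Linux RAID autodetect" on each disk
# (created beforehand with fdisk or parted), then:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0                  # the array becomes an LVM physical volume
vgcreate vg0 /dev/md0              # volume group on top of the array
lvcreate -L 16G -n swap vg0        # 16 GB swap volume
lvcreate -l 100%FREE -n root vg0   # root volume takes the remaining space
mkswap /dev/vg0/swap
mkfs.ext4 /dev/vg0/root
```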

Voilà! You now have a fully redundant and performant RAID array, without the expense of a fancy hardware controller. Enjoy!

Weird keyboard problems on Thinkpad Edge

My main home machine is a Thinkpad Edge. After just over two years, the keyboard started acting very strangely. Basically, it would type different letters or symbols than those on the keys pressed. Pressing some keys would result in a bunch of different characters appearing on the screen.

There is a post on the Lenovo forums about this; it includes some of the remedies people have tried. A particularly clever poster who goes by the handle jktroy found the source of the trouble.

The problem

Basically, the design of the laptop is flawed such that the ribbon cable that connects the keyboard to the main-board is pinched under a bracket for the touch-pad. This pinching over two years — probably increased by leaning on the wrist-rest and touch-pad areas — caused the touch-pad bracket to crimp and scrape the cable so that the exposed metal bracket touches and shorts the bare wire of the cable.


The solution

To solve the problem, place a small piece of electrical tape over the bare metal as shown in image 3. To get to it, you’ll need to partially disassemble your Thinkpad — this is not as hard as it may sound. You only need to take up the wrist-rest part in order to access the bit that must be taped. It might be helpful to refer to the maintenance manual for your model, which can be found on Lenovo’s support site. I didn’t use a manual, but I’ve done this sort of thing before, so it really depends on your comfort level too. By the way, the purpose of the tape is to insulate the metal piece from the now-bare wire, preventing the contact that causes the keyboard weirdness.

Do your best to avoid pulling on any of the ribbon cables in your computer. If you pull one, no big deal — just (very gently) pull up on the locking mechanism at the front to unlatch the cable’s end, push the cable all the way back in, and latch it down again. If you continue to have trouble with your keyboard, pointing stick, or touch-pad and you’re quite sure you’ve applied the tape correctly, try re-seating your ribbon cables.

Here is the underside of the touch-pad before application of electrical tape.

Here is that same area with the tape applied. It’s difficult to see but the tape is the black square on the left.

Hope this helps anyone out there who’s run into the issue.


Disclaimer: I take no responsibility for any problems arising from following this article. Take apart your laptop at your own risk!


Fix Broken Display After Ubuntu 12.10 Upgrade

After updating my trusty Lenovo X100e to the latest version of Kubuntu (which is actually presently in the final pre-release), my display resolution was locked in at a very sub-optimal 1024 x 768 and I couldn’t change it.

What happened: It appears some FGLRX (ATI proprietary drivers) packages were installed or changed during the upgrade process and were not functioning properly. Since I’m not a big gamer and this machine really isn’t made for graphics-heavy games anyway, I just removed the driver packages. After that process was complete, I rebooted and my sharp, high-resolution configuration was back!

Here’s how it’s done via the command console:

sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev*

That should remove all the FGLRX packages. I just rebooted and went on with life from there, since the default video drivers do fine for me, but if you want to re-install the proprietary ATI drivers, the following should get you there:

sudo apt-get update && sudo apt-get install fglrx


These steps work for Ubuntu and most of its variants (Kubuntu, Edubuntu, Xubuntu, Lubuntu, etc.) as well.

For more information on the ATI binary drivers, see the Ubuntu wiki.



I’m running for the CIRA board of directors

I am a “member nominee” in this year’s CIRA board of directors election. Together, we will make the Internet in Canada — and the world — awesome.

I need your support. You can help by showing your support on the CIRA web site.

Below are the contents of my CIRA board of directors nominee application form, pasted verbatim. More information on the election, along with a copy of my application, can be found on the CIRA elections site.

My name is Mike Toscano. I believe in a free and open Internet for all Canadians.

With your support, we will proudly continue Canada’s journey to bring the Internet to new heights as an incredible vehicle for business, connectivity, information, and expression for everyone.

There is a lot of information about me, my qualifications, and my views in the responses below but here is a bit to get you started:

* I have a strident belief in a free and open Internet with equal access for all. My firm support of positions expressed by CIRA CEO Byron Holland on infrastructure, privacy, performance, and access serves as an example.

* My technical knowledge with regard to Internet technology is expansive and comprehensive. I’m a geek, through and through. I’ll contribute to the strong, up-to-date understanding of the technical issues on the Board, and ensure initiatives are relevant and have technical merit.

* I know how organizations like CIRA work. I also know a lot about business. My knowledge and skills in these areas would help make CIRA exceptionally efficient and well managed, with a laser focus on the needs of its constituents — you! As well, I would help CIRA stay on top of issues that matter most to Canadian businesses and people across the country. In cases where there is tension between the needs of business and people, you can bet on me favouring people every time.

After reading my responses to the questions below, please feel free to read more on my blog or my LinkedIn profile. I’d love to hear from you via e-mail, Twitter, or the CIRA elections message board about what you’d like to see in your CIRA board members, your thoughts on the issues, and, of course, any questions you might have.

Thanks for your consideration,

Twitter ID: @mike_toscano
E-mail: mike4cira [at] miketoscano [dot] ca

CIRA Board of Directors nominee application questions:

1. Why do you want to be on CIRA’s Board of Directors?

It is my aim to do everything I can to have the greatest possible positive impact on the world. I can be most effective in this pursuit by leveraging my skills and experience, which are in the realms of technology, business, and public policy.

CIRA plays an important role in the development of the Internet, and by extension, in the development of Canadian business and society. I want to do my part to help CIRA be its very best, as well as to ensure it operates firmly in the public interest.

My simple, yet admittedly ambitious goals to make people’s lives better and the world a more just, intelligent place are why I have pursued my degrees, launched my technology business, and why I have decided to run for the CIRA board of directors.


2. What specific skills or experiences do you have that make you the best candidate for the CIRA Board?

* Over 13 years’ experience in information technology, the last five of which have been completely focused on Internet services (such as HTTP, databases, load balancing, and firewalls for web sites) for high-traffic sites of some of the best known brands. I have incredibly comprehensive knowledge of technology relevant to CIRA. What’s more, I truly love technology and see it as an enabler — empowering people to communicate, learn, and organize to make their lives and the world better.

* As an Internet and technology professional, I have worked with many companies, large and small. With this experience and my business education, I understand well how business works and the needs of new, small, and growing firms as well as large, established ones. This understanding helps me promote business and provide an environment for firms to thrive without encroaching upon the needs and rights of individuals, which are paramount.

* I have outstanding communication and people skills. I like people and work well with others. If elected, I will foster positive change and innovation through healthy discussion, debate, and cooperation without “shaking things up” or driving other board members and staff nuts.

* I am a divergent, strategic thinker. I cultivate environments where ideas flourish and I’m pretty good at coming up with fresh, creative ideas, myself. One of the most exciting products of the Internet is the generation and proliferation of thought and ideas. A great example of this is the open source movement. It is through cooperation, collaboration, and unfettered exchange of ideas (and constructive criticisms) among smart people all over the world that powerful, disruptive projects like Linux, OpenStack, and Hadoop have been developed, improved, and distributed. Together with the Canadian Internet community, CIRA can build a vibrant ecosystem of creativity for tackling Internet issues. I would love to utilize my skills to help make that happen.

* I have a Master of Business Administration (MBA) degree from the University of British Columbia — one of the world’s top universities — with a specialization in information technology and a sub-specialization in marketing. In the process of earning this degree, I learned a great deal in all areas of business — accounting, finance, economics, entrepreneurship, corporate social responsibility, and much more. As a board member, this knowledge would help provide context behind our initiatives, as well as enable me to better communicate with and understand the perspectives of other .CA members, CIRA board members and management, and other stakeholders — inside and outside of CIRA — who come from varied backgrounds in government, business, and non-profit sectors.


3. What do you feel are the top three challenges and opportunities facing the .CA domain name space during the next three to five years?

First, we must ensure the Internet remains open, accessible, and free (as in freedom) for all Canadians. We, in Canada, have a significant role to play in shaping the Internet at large as well and our .CA name space is an important part of that. Unfortunately, attaining and keeping an open, accessible, and free Internet will always be a major challenge because more than a few powerful groups in the world have too much to gain by locking up and stifling it.

We can achieve this fundamental goal by maintaining CIRA as a strong, democratic, independent, and credible institution that operates squarely in the public interest. It is essential that we elect responsible board members with solid knowledge of how the Internet and registries work and of CIRA’s role in shaping the Internet, coupled with an equally solid commitment to Canadian principles and values of freedom and justice. There is, perhaps, nothing more important to me than human rights. I would stand firmly in the way of those who would attempt to use the name system to silence or censor expression on the Internet. Moreover, I would work to maintain CIRA’s solid corporate governance structure and ensure all activities and elections continue to be conducted with complete transparency and integrity in the interest of all Canadians.

A second major challenge for CIRA and all Internet organizations is security. As we enter a new data-driven age powered by computers and the Internet, more information is stored and more activities take place digitally — by people, businesses, and governments — than ever before. This phenomenon will continue at an incredible rate. While these advances in technology are great enablers in Canadian society, their vulnerabilities pose great risks and threats. We have seen several examples of this with security issues identified in DNS in recent years, as well as a new breed of incredibly sophisticated, powerful viruses, trojans, and worms — some likely even sponsored by nation-states (Stuxnet, Flame). Herein lies an opportunity to leverage innovation and excellence — such risks can be controlled through proper process, public policy, defensive technologies (like DNSSEC), and rapid, nimble response. Public organizations often perform reasonably well at the first two items, but the pace of the Internet today requires a nimble, well-informed group able to quickly react to, anticipate, and take advantage of changes in technology, risks, and disruptive change. A sharp, technologically astute board will be able to foster innovation to address these issues.

Finally, CIRA’s aim to sustain and increase the .CA domain’s prominence and relevance as the Internet continues to grow at break-neck speed and with the introduction of many more TLDs (top-level domains) will become more challenging to achieve. As your CIRA board member, I would bring my skills in marketing, public speaking, and writing to help raise the profile of the organization, and the .ca TLD. As well, I would do more to foster engagement in the Internet and business communities as I see such engagement as vital to raising awareness and in running any entity in the public interest.

CIRA has already done a fantastic job in all three of the aforementioned areas but the challenges presented will become even more substantial as time passes, requiring renewed commitment and resolve. I am eager to tackle these and all of the challenges we can look forward to facing in the future.


4. What specific actions do you propose to overcome one or more of these challenges and opportunities?

I have woven actions, general and specific, into my responses on each of the challenges and opportunities mentioned. If you would like me to expand on them or provide more detail, please feel free to contact me.


5. Please describe your understanding of the role of a Director on CIRA’s Board.

Like any board of directors, CIRA’s board serves as a primary form of governance of the organization. This means ensuring CIRA is accountable to its stakeholders and operates according to its bylaws and other applicable regulations.

The board provides general direction to CIRA and most of all, supports and provides guidance to achieve the corporation’s vision of being a world leader among country code top-level domain registries and to make .CA the TLD of choice for Canadians.

As an individual, I would consider my place on the board one not only of fulfilling these fiduciary duties outlined above but also in doing the best I can to help the board be efficient, effective, and a valued resource for information and guidance to CIRA management and the Internet community at large.

Wireless Site Survey With Free Tools

Between characteristics of modern buildings (block walls, walls with metal studs, cement floors, and the like) and the large numbers of wireless networks assailing the airwaves, setting up a reliable wireless network can be a real challenge. Site surveys — where technical architects / network administrators examine a given physical environment’s suitability for wireless networks — can really help identify potential WiFi issues.

Unfortunately, many of the tools traditionally employed for performing wireless site surveys cost thousands of dollars. Not to worry! Here, we’ll discuss how to perform a wireless site survey for 802.11 networks using free open source tools so you can build a rock-solid set-up, regardless of budget. This article focuses on the tools, rather than the process of WiFi surveys. For information on the process, check the links at the bottom of this article.

WiFi Analyzer is a tool that basically turns your Android phone into a spectrum analyzer. With it you can easily see what access points are nearby, the channels they are on, and their signal strength — all through clear, colourful real-time graphs. This is one of the fastest and easiest ways to see what’s going on in the airwaves near your home or office and how to avoid interference on your network. WiFi Analyzer can be found on Google Play for free (the program is ad-supported).

To take things a step further, you can break out Kismet, a powerful wireless utility that can not only do all of the above but also possesses an array of capabilities for wireless security auditing as well as intrusion detection. Kismet is in the repositories of several popular Linux distributions, and you can download the source from the project web site as well. The links page of the project web site also includes a link to a Windows port of the Kismet front-end. If you just need to use the tool occasionally and don’t have a Linux machine handy, I recommend using a Linux live CD or VM. Heck, maybe you can use this as an excuse to take the plunge into the awesome world of Unix.  ;)

To use Kismet for a simple wireless survey, you really only need to use a few of its features. Let’s go through running Kismet for this purpose, step-by-step.

* Launch Kismet as root. If you are using Ubuntu, type “sudo kismet” at the command line. If you are using pretty much any other Linux distribution, become root by typing “su -” [enter] at the command line. Then type “kismet”.

* After pressing the space bar to dismiss the introduction message, we are presented with the list of networks found so far. As Kismet is a passive discovery tool, it will find more networks as time passes and it observes traffic moving across them.

* Pressing “h” brings up the help window, which explains commands and what the items on screen mean. We’ll go through most of those relevant to wireless auditing here to make it easy for you to get started.

* A good place to look first is the statistics window. Press “a” and it appears, presenting a nice high-level view of what Kismet is detecting – number of networks, packets transmitted, maximum packet rate, and the all-important channel usage. There is even a nice graph showing the concentration of APs on each channel. A table with exact numbers of APs on each channel is to the right. With this, we can get most of the information we need to see how crowded a given area is with wireless access points and what channels everyone is on. If you need to dig deeper, read on.

* Sort the results by typing “s,” then select how you would like them to be sorted. I usually sort by channel when doing a wireless survey. You must sort in some fashion in order to navigate the list of access points (APs).

You may see an item in the list labelled “Probe Networks” (often marked with a “G” in the network type (T) column because they are in a group, otherwise, they are labelled as the “P” network type). This shows wireless clients in range attempting to access networks that may or may not be in range. So they are not really relevant in a wireless audit. The probe networks detection feature is more useful for security auditing. It can reveal information about networks that are intended to be hidden, among other things. To see these networks, highlight the Probe Networks entry and press the space bar.

Other common network types are ad-hoc networks (designated by an “H” in the type column), and access points (designated by an “A” in the type column). Of course, APs are the type of networks you should be paying particular attention to. Ad-hoc networks are typically of less concern in wireless surveys because they are usually temporary.

There you have it! With WiFi Analyzer and Kismet, you can perform a very effective wireless network survey without spending a dime (as long as you have an Android device already). Once your survey is complete, choose the least crowded channel available. It’s best to choose one that is farther away from occupied channels. For example, if other networks are on channels one and five, it is best to set your network to channel three, if it’s open. Then, you should have a relatively interference-free connection to your network. You can often check signal-to-noise ratios on your AP (especially if you have an AP running DD-WRT). Kismet also reports noise, but it always seems to be 0 when I check it, which is not right.
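The channel-picking rule in the last paragraph is easy to sketch in a few lines of plain Python (my own illustration — not part of either tool): among the free 2.4 GHz channels, pick the one farthest from its nearest occupied neighbour.

```python
def best_channel(occupied, channels=range(1, 12)):
    """Return the free channel whose nearest occupied channel is farthest
    away, or None if every channel is taken."""
    occupied = set(occupied)
    free = [c for c in channels if c not in occupied]
    if not free:
        return None
    if not occupied:
        return free[0]
    # A candidate's "clearance" is its distance to the closest occupied channel
    def clearance(c):
        return min(abs(c - o) for o in occupied)
    return max(free, key=clearance)

# Survey found networks on channels 1, 6, and 11 (a very common situation)
print(best_channel([1, 6, 11]))  # → 3
```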

More information on wireless surveys and the tools covered here are available via the links below.


WiFi survey process links:


WiFi Analyzer Google Play page:

Kismet project page:

Picalo: An Open Source Competitor to ACL and IDEA

Reprinted from my post at the now defunct IT risk management blog at UBC Sauder School of Business.

In BAIT 512 in the Sauder MBA program, I saw the references to ACL in the syllabus for data analysis in auditing. It mentions we have to go to the computer lab to use ACL because it is a (very expensive) commercially licensed product. Being the open source person I am, I thought to myself, “What a shame. I bet there is at least one open source package that does the same thing really well for free.” After all, the number of outstanding open source security and network auditing tools out there is enormous (Nmap, tcpdump, netcat, Wireshark, and others spring to mind). Lo and behold, a query to Google instantly brought me to Picalo.

Picalo is a really well done Python GUI application for Mac, Linux, and Windows that does (as far as I can tell at this early stage) everything ACL and IDEA do, and it includes some things those programs might not have, such as a library of pre-written scripts for a variety of types of analysis and a Python framework for writing your own scripts (rather than using some specialized language that only works with one application). Picalo is very well documented, with lots of tutorials and information on the application’s operation and internals. Developers can also download and use Picalo’s Python libraries as an engine for their own applications.

I downloaded the Python source for Picalo and ran it on my laptop (an Ubuntu Linux machine) and was able to get to work right away using sample data and the library of analysis scripts. Users of Mac and Windows will also be able to get up and running quickly by using the available installation packages for those platforms.

It was easy to see how powerful a tool like this is for identifying fraud, inconsistencies, or anomalies in an organization’s records. There is a lot going on here with such a capable tool-set, and one could probably spend a lot of time learning the ins and outs of it. I haven’t tried this yet, but it appears you can even use Picalo as an interface to a running database and run queries against it. In any case, Picalo makes finding errant payments to vendors, suspect withdrawals, information gaps, and the like much easier and more efficient than a manual or spreadsheet-driven technique. If you have any interest in IT or financial auditing, I highly recommend taking it for a spin.
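To give a flavour of the kind of test these packages automate, here’s a tiny sketch in plain Python (this is not Picalo’s actual API — just an illustration of the idea): flag payments with the same vendor and amount but different invoice numbers as candidate duplicates.

```python
from collections import defaultdict

# Made-up sample records: (invoice_no, vendor, amount)
payments = [
    ("INV-1001", "Acme Corp", 1250.00),
    ("INV-1002", "Blue Ltd",   880.50),
    ("INV-1003", "Acme Corp", 1250.00),  # same vendor & amount as INV-1001
    ("INV-1004", "Blue Ltd",   120.00),
]

def suspected_duplicates(records):
    """Group payments by (vendor, amount); any group holding more than one
    invoice number is worth a closer look."""
    groups = defaultdict(list)
    for invoice, vendor, amount in records:
        groups[(vendor, amount)].append(invoice)
    return {key: invs for key, invs in groups.items() if len(invs) > 1}

print(suspected_duplicates(payments))
# → {('Acme Corp', 1250.0): ['INV-1001', 'INV-1003']}
```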


Links:
– Main Picalo project page
– Picalo introductory manual
– Download page
– A brief Wikipedia article on auditing tools
– My blog, which contains other articles on auditing tools as well