I find myself designing, refactoring, improving, and optimizing my code as new challenges arise. It’s a fun and engaging process that has taught me a lot in my professional career. I tend to lean on Perl and Python in my work because of their fast turnaround and ease of debugging. I use Perl more often than Python, but that is changing.
As a data analyst, I find a significant advantage in working with an interpreted language. I recognize that compiled C++ outperforms its interpreted counterparts in many respects, but being able to quickly prototype a module, data model, or data miner gives me the agility I need to characterize a system.
Typically I am asked to provide a report on some new concept within days of the request. This is when the data mining really pays off. I have developed a wide variety of data miners, analysis tools, and automators that perform passive and persistent monitoring of our systems. The real-time system I work on spans two flavors of Unix, a Linux distribution, and three different chip architectures. There are a lot of moving pieces, with multiple baselines of our product maintained by a large group of dedicated developers. As the only analyst on the program, I do not have the time to maintain multiple baselines of my own tools. An advantage of an interpreted language is that it reduces that need to a few very rare cases: with care in the design, a single script or a series of modules can be deployed on all the systems at once, transparent to the user.
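One way to keep a single script portable across several OSes and architectures is to detect the platform at run time and dispatch from there. Here is a minimal Python sketch of that idea; the dispatch table and the OS names in it are illustrative assumptions, not my actual tooling:

```python
import platform

def system_profile():
    """Identify the OS and chip architecture at run time, so one
    deployed script can adapt itself rather than requiring a
    separately maintained copy per platform."""
    return platform.system(), platform.machine()

def process_list_command():
    """Pick a platform-appropriate process-listing command from a
    small dispatch table (hypothetical entries for illustration)."""
    os_name, _arch = system_profile()
    commands = {
        "Linux": ["ps", "-eo", "pid,comm"],
        "SunOS": ["ps", "-ef"],   # one Unix flavor, for illustration
        "AIX":   ["ps", "-ef"],   # another Unix flavor, for illustration
    }
    # Fall back to a conservative default on anything unrecognized.
    return commands.get(os_name, ["ps", "-ef"])
```

The platform-specific differences live in one table, so the script itself stays identical on every host; the same pattern works just as well in Perl with `$^O` and `POSIX::uname`.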
In upcoming posts I will go into detail about the techniques I use and the interesting things I have learned that help me with my hobbies and work. The next few posts will focus on memory “management” in Perl, run-time optimization, data mining techniques, and data visualization ideas. Hopefully, you will find these posts helpful in avoiding some of the pitfalls I first fell into.
See you soon!