Database App Running Strong for 22 Years? PowerBuilder!

Chances are that if your mission critical database applications are still running after 22 years, they were written in PowerBuilder.

PowerBuilder was exciting in the 1990s because it provided an Integrated Development Environment (IDE) and a rich programming language that enabled rapid database development.

PowerBuilder is impressive today because code written in the 1990s still runs perfectly in 2018. Talk about return on investment!

PowerBuilder’s core competency is the construction of database applications for the Windows desktop. These applications can grow extremely large yet still run smoothly. Integration with the world of database servers is graceful, and the developer has high-level tools for working with them.

Furthermore, PowerBuilder supports feature-rich user interfaces, both in visualization and in interaction.

For an example of PowerBuilder code supporting a few intriguing UI features, see the following repository on GitHub:

The Hidden Promise of Bots – Accessibility

Though the name Bots may seem faddish, the way users interact with them is a leap forward in both convenience and accessibility.

These are computer programs that one can communicate with via short text messages. They may access a single company’s data or draw from multiple organizations’ computer systems.

As an example, suppose the chamber of commerce in a resort town wants to make it very convenient for guests to find food, lodging, and attractions.

They might plan that version 1 of such a Bot would be available 24/7 by text message and would help users find food and lodging and learn which lift lines are running.

Version 2 of such a Bot might be made available both by text message and on the town’s social media web page. It also might enable the users to reserve tables and rooms and check the current wait time on lift lines.

Clearly Bots have the promise of being extremely convenient. The fact that short text messages to a single number can be used to conduct a variety of business makes them dramatically more accessible than requiring users to navigate multiple websites and customer service phone lines.
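
As a loose sketch of the version 1 idea above, a Bot of this kind can start as little more than a function that routes an incoming text message by keyword. The replies below are hypothetical placeholders; a real Bot would query the chamber of commerce’s systems.

  // Hypothetical "version 1" Bot: route an incoming text message by keyword.
  function handleMessage(text) {
    const msg = text.trim().toLowerCase();
    if (msg.includes('food'))    return 'Open now: Summit Grill, Trailside Tacos';
    if (msg.includes('lodging')) return 'Rooms tonight: Alpine Lodge, Creekside Inn';
    if (msg.includes('lift'))    return 'Running: Eagle, Panorama. Closed: Ridge.';
    return 'Try texting "food", "lodging", or "lifts".';
  }

  console.log(handleMessage('Which lift lines are running?'));
  // -> 'Running: Eagle, Panorama. Closed: Ridge.'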

Automated Testing is Hard – And How to Mitigate That

Automated testing is inherently difficult.

First, automated tests are themselves computer programs that must run reliably and correctly. Verifying that the tests themselves work as intended becomes harder as the quantity and complexity of the test programming increase.

Second, the need for testing extends not only over relatively narrowly defined functions but also across systems. Systems may include browsers, front-end code, session tracking, user privileges, APIs, databases, back-end code, and simulated human interaction. Many parts of such systems run asynchronously with respect to each other, which creates challenges in setting appropriate timeout periods and in identifying the signals that indicate when the next step of a test may be taken. Furthermore, successful completion of automated test suites may be made a programmed requirement before code can be promoted from development to quality assurance environments.
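
As one small illustration of the asynchronous timing problem, a test can poll for a signal with an explicit timeout rather than relying on a fixed sleep. The sketch below is JavaScript; the names are illustrative and not taken from any particular test library.

  // Poll a condition until it becomes true, or fail with a clear timeout error.
  async function waitFor(condition, { timeoutMs = 5000, intervalMs = 100 } = {}) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
      if (await condition()) return;                              // signal observed
      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
    throw new Error('Timed out after ' + timeoutMs + ' ms waiting for condition');
  }

  // Example (hypothetical): wait until an asynchronous save has reached the database.
  // await waitFor(() => db.recordExists('order-42'));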

Third, developing tests is time-consuming. Not only is writing test programs a complex and iterative task like other programming tasks, but running the tests between iterations can take an annoyingly long time, which further drags out test development.

These fundamental challenges are mitigated by making automated testing part of the development process from the beginning.

First, the programming team chooses libraries and environments with testability in mind. They look at testing best practices as well as programming best practices.

Second, testing costs are kept to a minimum when early design and programming are done with automated testing in mind.

Finally, early inclusion of automated testing in the development process allows for better assessment of test development capabilities and costs.

. . .

Three examples of automated testing over narrowly defined functions appear below. All examples were written by this author as unpaid studies and are released under the MIT source code license.

1.)  Using Recursion – Sometimes an Elegant Solution

Source Code:
See JavaScript panel and scroll to:
* function automatedTests(sourceArray, testArray) and
* // Automated testing

Automated testing clearly enhances the value of the proposed solution and can be done with a few lines of code and no additional testing library.

2.) A PHP website with Google Sheets as the Content Management System

Source Code:

Data-driven automated testing using the PHPUnit library. See other PHPUnit tests in:

3.) DOS Batch scripting to make sure ports are open (or closed)

Source Code:

Even the venerable DOS Batch scripting language can be used to do automated testing.

Using Recursion – Sometimes an Elegant Solution

Recursion is a technique where a function gets part of the way through a solution, then calls itself to continue the solution process.

Certain types of problems, such as iterating through a directory and finding all the nested folders and documents, can be conveniently solved using recursion.

A caveat, however, is that each successive recursive call pushes the function’s scope onto a stack. If your processing reaches the limit of that stack, your program will fail with the classic “stack overflow” type of error.

So if you choose to implement a recursive algorithm, it would be wise to build a test that verifies the algorithm works in the target environments, and to do a little research on how deeply function calls may nest in each of them. I say environments, plural, because there are, for example, many versions of web browsers on which one’s web app may run.
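
Here is a small JavaScript sketch of both ideas: a recursive walk over a nested folder structure, and a quick probe of how deeply the current environment allows function calls to nest. The folder data is purely illustrative.

  // Recursively collect every document in a nested folder structure.
  function listDocuments(folder) {
    let docs = [...folder.documents];
    for (const sub of folder.folders) docs = docs.concat(listDocuments(sub));
    return docs;
  }

  // Probe the approximate maximum call depth before the stack limit is hit.
  function maxCallDepth() {
    let depth = 0;
    function probe() { depth++; probe(); }
    try { probe(); } catch (e) { /* e.g. "Maximum call stack size exceeded" in most engines */ }
    return depth;
  }

  const tree = { documents: ['readme.txt'], folders: [{ documents: ['a.doc'], folders: [] }] };
  console.log(listDocuments(tree));   // ['readme.txt', 'a.doc']
  console.log(maxCallDepth());        // varies by browser and engine; verify in each target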

Recently, I stumbled upon a programming challenge that intrigued me. As I started to sketch out a process to solve the problem, it dawned on me that the solution would be quite short using a recursive algorithm. I’ve changed the wording of the challenge a little in the hope that others who find and take on the original challenge will not readily find this solution.

I’ve put both the challenge and my JavaScript solution into a web page where you can open, re-run and even harmlessly modify the code if you like.

To see the challenge and JavaScript code that solves it, follow this link:

Thanks for looking!

HL7 with a JavaScript Class

HL7 is the standard that enables healthcare systems to rapidly agree on a data transmission format. One may be surprised to learn the large number of systems in a given hospital or laboratory that use HL7 to communicate, either internally or externally, with admissions, orders, results, billing, or other records.

The structure of HL7 is intended to be human readable. Typically one will see a message where there are lines of text that begin like:

PID|1|…|…||patient lastname^firstname|…

The lines are called segments and typically the segments are broken into fields with the vertical bar (|).

The first field in a segment is a standard code which identifies the segment type so that the receiving system can know what fields are expected to be found in that segment.

When people talk about HL7 messages, they might say “Last name is in PID 5.1 and first name is in PID 5.2.” Thus fields may be broken into components, typically with a caret (^).

Some segments may need to repeat with additional information. In this case the second field may be a series of numbers, such as OBX 1, OBX 2 etc.

I recently took on a programming challenge asking how one might parse an HL7 message and how one might write an HL7 object class with constructor, get, and set methods. Given that, how might one write functions using that class to perform a series of operations on the message, such as reading a field or a component within a field, writing a field with components, reading a field in a repeating segment, and finally (not that one is likely to do this in real life) deleting the odd-numbered segments of a repeating segment type?
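
To make the structure described above concrete, here is a minimal JavaScript sketch of parsing a message and reading PID 5.1 and 5.2. It is not the class from the linked web page; the sample message and the simplifications (simple line splitting, no escape-sequence handling) are mine.

  // Minimal HL7 reader: segments split on line breaks, fields on "|", components on "^".
  class Hl7Message {
    constructor(text) {
      this.segments = text.trim().split(/\r?\n|\r/).map(line => line.split('|'));
    }
    // get('PID', 5, 2) returns component 2 of field 5 of the first PID segment.
    get(segmentId, fieldIndex, componentIndex) {
      const seg = this.segments.find(s => s[0] === segmentId);
      const field = seg ? seg[fieldIndex] : undefined;
      if (field === undefined) return undefined;
      return componentIndex ? field.split('^')[componentIndex - 1] : field;
    }
  }

  const msg = new Hl7Message('MSH|^~\\&|SendingApp\nPID|1||||Lastname^Firstname');
  console.log(msg.get('PID', 5, 1));  // 'Lastname'
  console.log(msg.get('PID', 5, 2));  // 'Firstname'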

The web page (see link below) is a solution to this challenge that I wrote in a day. Actually, I finished around midnight and was practically cross-eyed by then. To get the website and my browsers to work with JavaScript classes, whose browser support was still arriving at the time, I had to use some oddball directives. You’ll see those at the top of the JavaScript panel on the left. The Console panel on the right contains the output of my test program.

At the time I wrote this web page (July 19th, 2016) the website worked under Chrome, Firefox, and Microsoft Edge on Windows 10, Android, or Mac, but not Safari on a Mac or Internet Explorer on Windows. So try the Chrome browser if you get an error opening the web page.

Thanks for looking!

String Concatenation Performance in Web Browsers

The coding example below was inspired by a challenge I recently considered:

I’d like a String Buffer. It should be optimized for prepend, append, and read-all operations. The prepend / append should accept variable-length chunks of immutable data — assume something like Java Strings or C arrays. Prepend / append will be roughly 10x more frequent than read. Describe an implementation and why you chose it.

Since the challenge did not specify a language and since my purpose here is to demonstrate useful coding techniques, I went ahead and created a JavaScript test bed which you can run and modify yourself.

The test bed allows one to try different prepend, append, and read-all algorithms to see which require the least time to complete.  It is designed to enable one to easily vary the maximum size of the strings being concatenated, the number of times the strings are concatenated and the relative frequency of Append / Prepend or Read-All.

Further, it includes a self-test feature and demonstrates programming techniques intended to be clear to the reviewer while providing a robust and inviting testing capability.

To run and explore the test bed, open the following web page:

What I found while researching string concatenation performance is that there are still posts on the web from previous years that recommend using special code to improve string concatenation performance in JavaScript. For example, where strBuffer is an array and pre and app are strings, prepend may be strBuffer.unshift(pre), append may be strBuffer.push(app), and Read-All may be strBuffer.join('').

What I found during my testing is that there are practically no use cases using current browsers (including Internet Explorer) where the special code improved performance at all. In fact, the special code generally slowed performance dramatically! Bottom line: concatenate strings today in JavaScript with the plus (+) operator.
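
For reference, here is a rough sketch of the kind of comparison the test bed performs. The chunk sizes and counts below are arbitrary, and the results will vary by browser and engine.

  // Crude timing comparison of plain "+" concatenation vs. the array "special code".
  function timeIt(label, fn) {
    const start = Date.now();
    fn();
    console.log(label + ': ' + (Date.now() - start) + ' ms');
  }

  const chunks = Array.from({ length: 10000 }, (_, i) => 'chunk' + i);

  timeIt('plus operator', () => {
    let s = '';
    for (const c of chunks) s = c + s;       // prepend
    for (const c of chunks) s = s + c;       // append
    return s;                                // read-all is simply s
  });

  timeIt('array buffer', () => {
    const buf = [];
    for (const c of chunks) buf.unshift(c);  // prepend
    for (const c of chunks) buf.push(c);     // append
    return buf.join('');                     // read-all
  });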

If a colleague issued the above challenge as a sincere work request, before coming up with an answer I might have the following questions:

Is there a known or perceived problem with string concatenation that he or she has in mind?

Are there particular use cases which have this problem? For example, does the problem occur only when customers A and B do processes X and Y as part of their end of month operations?

What is the nature of the programming environment in which the string concatenation is an issue? For example, does the problem occur in code written in C, Java, or JavaScript in web browsers?

Where does the applicable code run? For example, is it on our server, a third party’s server, or a client’s web browser?

Is the environment periodically updated by third parties or is its version entirely under our control? 

If one were to write special code to improve the performance of string concatenation, it would be prudent to revisit that performance testing periodically, since web browsers, compilers, and interpreters are regularly revised, and since the computers and servers running the code are periodically replaced or upgraded. Efforts to improve performance can become outdated, creating unnecessary complexity and even slowing down an application.


DOS Batch scripting to make sure server ports are open (or closed)

This project is a result of a series of DevOps challenges I have recently taken on.  It builds on two previous projects. (See this and this).

This particular challenge was to:

  • Write a script using any language that takes an IP address and list of ports from standard input
  • Check to see if the ports are open
  • If not, send a command via SSH to the server at the IP address to start the service that makes the port open
  • Output the results to a log file
  • For extra credit let the script run against a server brought up by a vagrantfile

I extended the requirements in three ways:

  1. Add a “Usage” section to the script, which is displayed if arguments are missing or are, for example, “/?”, “/h”, or “-h”.
  2. Add the ability to close as well as open the ports. This is to readily support testing, if not for completeness’ sake.
  3. Build a test script that verifies that the solution meets the requirements.

Also in the interest of user friendliness, I followed these principles:

  • Just before running a command that could take time depending on the speed of software, networks or servers, echo to the terminal a line explaining what operation is in the works.
  • In the log file, be more terse and to the point unless there was an error. In that case, try to bring in information that can help lead to a solution, such as the command being run, its standard output, and its error output. Try this out by deliberately messing with the name of the Linux command or service embedded in the script. Where there is no error, check out the sample logs generated by running the test script in the link below.

This challenge is a natural follow-on to the previous challenge, which brings up a virtual server on one’s personal computer using a vagrantfile. So we’ll take the extra credit by running against that. See:

Since the virtual server hosts a website, and port 80 is where the website is accessed from the browser, testing the script could have the very tangible result of taking the website offline or bringing it back online.

The scripting language I chose is DOS Batch. This is handy because I’m running Windows on my personal computer and have used DOS Batch in the past.

In order to begin, I had to find a utility to check the status of ports and ended up downloading and using Nmap. With Nmap, I also discovered some additional open ports on the server, which allowed me to extend testing to multiple ports. So in addition to port 80, I also support port 111, which turns out to be easy to open and close on the virtual server as well.
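
For comparison, here is a hypothetical sketch in JavaScript (Node) of the port check itself. The actual solution calls Nmap from DOS Batch; the host, ports, and timeout below are only illustrative.

  // Report whether a TCP port accepts connections within a timeout.
  const net = require('net');

  function checkPort(host, port, timeoutMs = 2000) {
    return new Promise(resolve => {
      const socket = net.connect({ host, port });
      socket.setTimeout(timeoutMs);
      socket.on('connect', () => { socket.destroy(); resolve(true);  });  // open
      socket.on('timeout', () => { socket.destroy(); resolve(false); });  // no answer
      socket.on('error',   () => { socket.destroy(); resolve(false); });  // closed
    });
  }

  // Example: check the ports used in this project on the Vagrant box.
  (async () => {
    for (const port of [80, 111, 99]) {
      console.log('port ' + port + ': ' + (await checkPort('', port) ? 'open' : 'closed'));
    }
  })();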

In order to test graceful failing where the script is not programmed to start or stop a service to open or close a port, I arbitrarily chose port 99 which was closed on the virtual server in the first place.

Google and Stack Overflow continue to be my constant consultants. I used them to determine which commands to run on the virtual server to start and stop services, and which services controlled which ports. Nmap also provides some intelligence on this.

Once I knew how to check a port and make it open or closed from the DOS command line, I was ready to begin scripting. To review the language (and quirks) of DOS Batch, I read the beginning and other portions of a DOS Batch scripting guide.

I worked through the inevitable bugs in the development process by temporarily:

  1. Commenting out the initial command of @echo off
  2. Inserting extra ECHO commands
  3. Inserting TYPE commands to dump the contents of a temp or log file to the screen

Once I got the main script into satisfactory shape, I capped this project with the automated test script also in DOS batch.

See my solution at:

… and thanks for looking!

Using Vagrant to deploy a server and website

I took on some DevOps challenges that included one to:

Create a vagrantfile that brings up a server with a working data-driven website. The vagrantfile is expected to use a configuration management framework such as Chef, Puppet, Ansible, Salt, or Docker.

I started by banging out a sample website. See A mini Data Driven site in PHP and MySQL.  I thought that might be the easy part of the challenge.

Next, in just a few days, I did an initial dive into Vagrant, starting with its getting started guide. I was delighted to learn that Vagrant will not only allow one to configure a real server, but will also allow one to install a virtual server on one’s personal computer. That simplified the problem in that I would not need to bring Amazon Web Services or something else into the mix for this challenge.

I then did a quick survey of the various configuration management frameworks mentioned above. I was looking for how to copy files from GitHub using one of those configuration management tools, called from a vagrantfile. I decided instead to write a Linux shell script function embedded in the vagrantfile.

I took the approach of doing my own simple configuration management with a shell script because I wanted to hit my first goal of bringing up a website from scratch with a vagrantfile as quickly as possible, and the Vagrant documentation suggests in a few places starting with a shell script (here, here, and here). Also, Vagrant has a catalog of pre-configured servers, some of which have a ready-to-go web server stack with Git installed as well! So all I needed to add was a script to pull the files from GitHub on the server and do some file copying.

I may very well go back and use Docker or some other configuration management software to copy in the website files to my virtual server as a second pass.  Stay tuned.

Conclusion: I was surprised at the relative ease of use Vagrant provides to deploy a server with a single configuration file on one’s personal computer.  I know I just scratched the surface with Vagrant.  However if you’d like to try out Vagrant and bring up a website with a single vagrantfile and the command “vagrant up”, check out my solution at:


Troubleshooting:

On 2/4/2016, after a reboot, running vagrant up in my test folder failed to make my web page available as expected at Uninstalling and re-installing Vagrant seemed to resolve the problem.

A mini Data Driven site in PHP and MySQL

I’ve been investigating some DevOps techniques, and for testing I needed a website that created, updated, deleted, and displayed data. I decided to write a minimal sample website.

Here’s the design:

Mini Data Driven Website

  1. Use a single index.php file
  2. Display an input for a “Key” and a “Value”
  3. Display a Submit button
  4. Display a table of existing keys and values
  5. Submit button posts to the same php file
  6. Index.php does the following before returning any text
    • Login to database
    • Retrieve posted key and value data
    • Special case: if key=”drop” and value=”table”, drop the table and exit
    • Create the empty key value table if it does not already exist
    • If key is supplied and value is not empty, check for pre-existence of key and either update the corresponding value or insert the key value pair
    • If key is supplied but value is empty delete matching key value pair
  7. That’s it!
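
For readers who prefer code to a spec, here is a rough JavaScript (Node) analogue of the same flow. The real site is PHP backed by MySQL; in this sketch an in-memory Map stands in for the database table.

  // Single-handler key/value page: create, update, delete, and display.
  const http = require('http');
  const { parse } = require('querystring');

  const table = new Map();  // key -> value (stands in for the MySQL table)

  http.createServer((req, res) => {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      const { key = '', value = '' } = req.method === 'POST' ? parse(body) : {};
      if (key === 'drop' && value === 'table') table.clear();  // special case: drop the table
      else if (key && value) table.set(key, value);            // insert or update
      else if (key) table.delete(key);                         // empty value deletes the pair
      const rows = [...table].map(([k, v]) => '<tr><td>' + k + '</td><td>' + v + '</td></tr>').join('');
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<form method="post">Key: <input name="key"> Value: <input name="value"> ' +
              '<button type="submit">Submit</button></form><table>' + rows + '</table>');
    });
  }).listen(8080);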

Here’s a link to the source code:

Here’s a link to the working site:

Exploring USPS and Google Address APIs

Quick Links:

Go To U.S. Postal Service Address Lookup Page

Go To Google Address Lookup Page

Objective: As part of the “Build 14 Websites…” course exploring the Google Address API, we were challenged to do our own address lookup page.

However, I first explored the U.S. Postal Service (USPS) Address lookup API and created a web page for that before proceeding with Google.


1.)  Both web pages support test data. Since both USPS and Google can charge for or restrict use of their APIs, and since test data enables efficiently exercising features without repeatedly hitting the API, test cases are important.

In the USPS version the test data is embedded in the code. In the Google version, however, the test data is saved in files, which I think is an improvement. Enter “test0” through “test3” (in the Zip Code field for USPS) to access the test cases.

2.)  Both the USPS and Google APIs may return a postal code suffix field. The absence of this field is an indicator that the address is likely not deliverable by the U.S. Postal Service: perhaps there is no house with that number, or in some rural communities the postal carrier may not service the entire road. In both pages, for U.S. addresses I added the suffix to the display when it was returned, or indicated that the address may be undeliverable when it was not. During testing I found an address I know is deliverable by USPS for which USPS returned the postal code suffix, but Google did not. In the U.S. the postal code with the suffix is commonly known as “Zip Plus 4”.
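
As a sketch of how one might apply this signal, the function below assumes a Google Geocoding-style result with an address_components array; the response shape and field handling are simplified and the sample values are hypothetical.

  // Build a "Zip Plus 4" string and flag addresses with no suffix as possibly undeliverable.
  function zipPlus4(geocodeResult) {
    const components = geocodeResult.address_components || [];
    const zip = components.find(c => c.types.includes('postal_code'));
    const suffix = components.find(c => c.types.includes('postal_code_suffix'));
    if (!zip) return { zip: null, maybeUndeliverable: true };
    return {
      zip: suffix ? zip.long_name + '-' + suffix.long_name : zip.long_name,
      maybeUndeliverable: !suffix   // no suffix suggests USPS may not deliver to this address
    };
  }

  // Example with a pared-down result object:
  console.log(zipPlus4({ address_components: [
    { long_name: '12345', types: ['postal_code'] },
    { long_name: '6789',  types: ['postal_code_suffix'] }
  ] }));  // { zip: '12345-6789', maybeUndeliverable: false }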

3.)  Both pages use PHP as a way to keep the API User IDs private.  PHP also has handy functions for loading test data.

4.)  Some Google API specific features:

a. The page using the Google API handles retrieving multiple addresses for a search.

b. The found addresses are links to Google Maps.

c. Hovering over the link shows information about the locale, such as “Town, County, State, Country”.

Source Code: (See address-checker folder)

Web pages:

The intention of this example is to show initiative in exploring alternate solutions. It shows consideration for live APIs by moving test cases to self-hosted test data. It also shows attention to the data and a drive to discover and utilize valuable features.