Tag Archives: Squidhead

Squidhead's Latest Features

I added a huge new feature to Squidhead, which I think deserves some mention. I've also added another that's not as cool, but still worth mentioning. I'll try to get both published out to Squidhead users in the next few days.

Feature: Paged and Sortable List views

Basically, this feature adds paging to the list view, and that view is sortable by each column shown in it. The compelling part of this feature is that it isn't doing the sorting and paging in ColdFusion: Squidhead writes a stored procedure that returns a sorted query of a particular page size. Because this happens in the database, it's blazingly fast.

Writing these types of stored procedures is really annoying. The SQL-92 specification prevents variables in the LIMIT, TOP, and ORDER BY clauses. Pulling down paged lists of records from the database requires adding an extra temporary column that stores the row number and selecting only particular ranges. Then, to handle ordering, you have to write a switch/case block for every ORDER BY possibility. It's the annoying, repetitive code that seems designed just so someone could write a code generator to produce it automatically.
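To make the shape of that generated code concrete, here's a minimal T-SQL sketch of the kind of paging-and-sorting procedure described above. The procedure, table, and column names (usp_entry_list_paged, entry, entryID, title) are hypothetical stand-ins, not Squidhead's actual output:

```sql
-- Hypothetical sketch; names are illustrative, not Squidhead's real output.
CREATE PROCEDURE usp_entry_list_paged
    @pageNumber INT,
    @pageSize   INT,
    @orderBy    VARCHAR(50)
AS
BEGIN
    -- A temp table with an IDENTITY column gives us a row number to filter on.
    CREATE TABLE #paged (
        rowNum  INT IDENTITY(1,1),
        entryID INT,
        title   VARCHAR(200)
    )

    -- ORDER BY can't take a variable, so every sortable column needs
    -- its own branch: exactly the repetitive code a generator is good at.
    IF @orderBy = 'title'
        INSERT INTO #paged (entryID, title)
        SELECT entryID, title FROM entry ORDER BY title
    ELSE
        INSERT INTO #paged (entryID, title)
        SELECT entryID, title FROM entry ORDER BY entryID

    -- Return only the requested page.
    SELECT entryID, title
    FROM #paged
    WHERE rowNum BETWEEN (@pageNumber - 1) * @pageSize + 1
                     AND @pageNumber * @pageSize

    DROP TABLE #paged
END
```

Every additional sortable column adds another branch, which is why hand-writing these is such a chore.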

Feature: List View Filter

The list view filter feature allows you to configure which columns get displayed in the list view of your generated applications. One of the first things I do when turning a generated application into a real application is to manually limit which columns get shown, as inevitably there is a text field that needs to go. This can now be done by simply altering the table.xml that Squidhead generates and setting the excludefromlistview property to true.
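As a rough illustration, a table.xml entry might look something like this; the excludefromlistview property is from the post, but the surrounding structure is an assumption and may not match Squidhead's actual schema:

```xml
<!-- Hypothetical table.xml fragment; structure is assumed. -->
<table name="entry">
    <column name="entryID" />
    <column name="title" />
    <!-- Drop the long text field from the generated list view -->
    <column name="body" excludefromlistview="true" />
</table>
```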

Northwind and Sakila support

This isn't really a feature per se, but I've started testing against Northwind for MSSQL and Sakila for MySQL. Both of these are sample databases provided by the vendors, and each exposes a wide range of database features. By testing against them, I found a number of errors and bugs in the Squidhead code that have been rectified. Going forward, Squidhead features will not be released unless they pass these tests.

Like I said earlier, I’d like to get these features out in the wild in the next few days.

Working for the Obama Campaign

I got an opportunity to volunteer for the Obama campaign and donated some custom web application building to the effort. In case you're turned off by politics, this post isn't going to be about politics; it's more about the environment and the technical challenges I experienced. Finally, I want to make it clear that I am only claiming a teeny, tiny part in the effort. The campaign was won by a lot of people working a lot harder for a lot longer; they deserve a lot of respect, even if you don't agree with them.

About three weeks before Election Day a call for volunteers came my way from a co-worker. The Voter Protection division of the Obama campaign in Pennsylvania needed someone who had experience working with databases.

After talking with them for a while, we distilled down their problems. They had about 6,000 volunteer lawyers willing to work 9,000 polling places to protect voters from various threats (some of it malfeasance, but more often someone's failure to grasp the full set of election law as it pertains to a particular voter). They were making these assignments by combining information from a central web application with information collected in the field, using copies of an Excel spreadsheet they made every night for in-the-field collection. The guy I was working for had to make that spreadsheet every night, and it was obvious he needed a better way, but he couldn't just drop what he was doing, knuckle down, and build it.

I, having no other responsibilities to the campaign, could.

In the end, from an application standpoint it was pretty basic: 2 tables and 1 linking table for the many-to-many relationship (multiple volunteers could be at multiple polling locations). I pointed Squidhead at it in fkcrazy (foreign key crazy) mode and it did all the work for me. I had to do some custom tweaking of the app, but I tried to do as little of that as possible because I knew there would be schema changes. I just had that sense. Then over the next few days I responded to various and sundry schema changes (I told me so).

It was a great experience, mostly because as a small database driven application with many schema changes and not a lot of custom interface work, it was the perfect use case for Squidhead. It reminded me just how useful code generation can be.

I finished up work on it in time for the last two weeks of the campaign. The local volunteer staff would use my application to make assignments. Then, before Election Day, they would upload their work to the central campaign to integrate with the incident tracking system there.

Over the next few days, I didn't think about it much. I watched the usage stats and got a note from my host, who noticed the spike in traffic (but coolly forgave the overage – YoHost rocks!). All in all I just watched the final days of election coverage and hoped all was well.

Around 11:30 on Election night I got a text message inviting me to the local victory party. I figured I should go, because with all of the hype, emotion, and passion of this past election season it seemed like an awesome opportunity. I got there; met up with the guy I worked for; and got introduced around. It went something like this:

Hey, I’d like you to meet Terry Ryan.

Hi, Terry. (Polite, but unexcited)

He’s the guy who ran the Numtopia application

Followed by me getting hugged by a total stranger.

I, uncomfortable, dismissed some of the praise because the app was pretty ugly, only to get some very informed feedback:

  • We had a pretty app from Chicago, yours did what we needed it to do
  • We couldn’t even search the other one by “ward”, which made it useless in Philadelphia
  • Yours was blindingly fast compared to the one from Chicago
  • And didn’t you write it in 3 days?

I have never been quite so thoroughly thanked and appreciated for my work. It was an awesome feeling. I am so gratified that I could use my skills to do something for the cause.

Now, I saw the “application for voter protection from Chicago” and I have to say it was very beautiful, with a whole lot of cool features.
I don't want to knock those guys: dealing with thousands of records at the state level and hundreds of thousands of records at the national level are two different things. Additionally, dealing with the demands of 1 office as opposed to 50 also changes the game. But I will say that in 3 days, 1 volunteer developer using ColdFusion replaced an application built over weeks by a team of paid developers using PHP. (I don't know the specifics.)

I took a few lessons from my experience:

The constraint of a drop-dead deadline can be incredibly freeing. I didn't have time to overthink things trying to come up with elegant solutions. Some of my solutions used query of queries in ways I would never recommend. But I needed to get it done, because November 4th was unavoidable.

Compelling user interfaces aren't always the answer. Squidhead creates usable and accessible but ugly UI components from the get-go; they're meant to be styled with custom CSS. The users' only concern was that they were usable. They didn't care that it wouldn't win a design award. Their major concerns were speed and predictability.

Squidhead is better than I think it is. I've been down on Squidhead for a while, as it doesn't have the user base that Transfer, Reactor, or Illudium have. Nathan Mische has been telling me that the stuff he wants to add will help get the word out, and I think he's right. Still, it did things I didn't originally think it could do, or thought would be hard to make it do, and it worked flawlessly. I just need to get the word out about how powerful the foreign key introspection is.

Working for the campaign was an awesome opportunity. It was pretty fulfilling compared to other volunteer political work I’ve done. But what’s more, it has pushed me to do a bit more, and have a little more confidence about the work I can produce. Expect to hear some new features coming out about Squidhead in the next few weeks.

Squidhead Still Swimming

I'm pleased to announce that Squidhead is getting some more chefs: Nathan Mische and Dave Konopka are joining the effort. I'm excited about both additions. Dave was working with me when I first wrote Squidhead and has already contributed some code. Nathan is, of course, the lead developer on ColdFire.

We’re still working out what’s coming down the pike, but improvements will include:

  • A more organized and useful API to the database introspection piece.
  • A better template system
  • A configuration creator and editor

It’s all towards making Squidhead more useful and extendable. Expect new releases soon.

Knowledge@Wharton High School Coming Soon

I'm pleased to report that the Knowledge@Wharton team launched a pre-release site for the upcoming Knowledge@Wharton High School. If you are, or know, any high school students, send them along. We're launching in February 2009, and as part of that we're holding an essay contest with prizes. I don't think we've spelled out what those prizes are, but from discussions I've heard, they're pretty awesome.

It's powered by ColdFusion 8 and Flash Video. I worked on the backend details: hosting, network configuration, database CRUD. My co-worker Sanjay did the design and UI, and my boss Dave did the Flash video setup. The time it took us to go from mockup to finished product was very short – much quicker than I had ever managed before on a non-personal project. I know it isn't a huge site, but after years of working on nothing but backend systems that never get seen by the public, it was extremely satisfying to work on something that isn't behind a corporate login.

A couple of other things make me happy about this. Squidhead powered the backend development, which made dealing with last-minute schema changes a snap. The site also uses the core application framework I've been developing for Knowledge. And it was the first project we did with the new one-click build process, which uses unfuddle.com, SVN commit hooks, ANT, and ColdFusion calling ANT to allow for:

  • Automatic publishing of SVN checked-in content to a shared development server
  • 1 Click publishing of checked in content to a shared development server (In case automatic is too slow)
  • 1 Click publishing of shared development space to staging
  • 1 Click publishing of staging to production
  • 1 Click publishing of SVN checked-in content to development to staging to production

This new model allowed for a thoughtful develop-and-review process during development, and was flexible enough to allow rapid content updates when we were rolling out to production.

All in all, it was awesome to use all of the stuff I learned about over the past two years at cf.Objective to do my job.

Automating Documentation Part 2

This is a follow up to the post Automating Documentation. Jim Priest wanted to see example code, and I’m happy to oblige.

For this example, I am sharing the code for documenting "steps" in Squidhead. Steps are operations that do one thing, like generate stored procs, or ANT scripts, or email developers. They are powered by .cfm templates in a specific folder; once there, they can be referenced in the project's config file. The documentation for them is included in the download for Squidhead, and can be viewed online (Squidhead – Steps).

So after the jump, here is the code for creating this documentation.

The first thing I do is set up a few parameters.

<cfset stepsXMLfile = "#expandPath('.')#/stepsXML.xml" />
<cfset outputfile = "w:\inetpub\wwwroot\squidhead2\docs\steps.cfm" />

<cfdirectory
    action="list"
    recurse="false"
    directory="#expandPath('../../steps')#"
    name="steps"
    type="file" />

Then I conditionally build the file in case I deleted it.

<h2>Building XML</h2>
<cfif not FileExists(stepsXMLfile)>

    <p>Transforming Steps to XML.</p>

    <cfxml variable="XMLOutput">
        <steps>
        <cfoutput query="steps">
            <#ListFirst(name, '.')#>
                <documentation />
                <requires />
            </#ListFirst(name, '.')#>
        </cfoutput>
        </steps>
    </cfxml>

    <p>Writing to Disk.</p>

    <cffile
        action="write"
        file="#stepsXMLfile#"
        output="#XMLOutput#" />

<cfelse>

    <p>Already Built.</p>

</cfif>

Then I go through and check whether each file has a documentation node:

<h2>Checking XML</h2>

<cffile
    action="read"
    file="#stepsXMLfile#"
    variable="rawFile" />
<cfset stepsXML = XMLParse(rawFile) />
<cfset changed = FALSE />

<cfoutput>
<cfloop query="steps">
    <cfset stepName = ListFirst(name, '.') />

    <cfif not structKeyExists(stepsXML.steps, stepName)>

        <p>Adding #stepName#</p>
        <cfset changed = TRUE />

        <cfset stepsXML.steps[stepName] = XmlElemNew(stepsXML, stepName) />
        <cfset stepsXML.steps[stepName]['documentation'] = XmlElemNew(stepsXML, "documentation") />
        <cfset stepsXML.steps[stepName]['requires'] = XmlElemNew(stepsXML, "requires") />

    </cfif>

</cfloop>
</cfoutput>

<cfif changed>
    <cffile
        action="write"
        file="#stepsXMLfile#"
        output="#stepsXML#" />
</cfif>

Next I go through and mark any documentation that is not filled in as a problem, and throw a 500 error. This causes ANT to stop running.

<cfset incomplete = FALSE />
<cfset stepArray = StructKeyArray(stepsXML['steps']) />

<h2>Checking Documentation</h2>

<cfoutput>
<cfloop index="i" from="1" to="#ArrayLen(stepArray)#">

    <cfif StructKeyExists(stepsXML['steps'][stepArray[i]], "documentation")
          AND len(stepsXML['steps'][stepArray[i]]['documentation']['XMLText']) lt 1>

        <p>Documentation for step #stepArray[i]# is not filled in.</p>

        <cfset incomplete = TRUE />

    </cfif>

</cfloop>
</cfoutput>

<cfif incomplete>
    <cfheader statuscode="500" />
</cfif>

I finish by writing the XML back out to disk as HTML. I'm sure this could be done through XSLT, but I don't bother.

<h2>Building HTML Reference.</h2>

<cfsavecontent variable="stepReference">

<h2>Step Reference</h2>

<cfinclude template="stepsDocs.cfm" />
<cfset stepArray = StructKeyArray(stepsXML['steps']) />
<cfset ArraySort(stepArray, 'textNoCase') />

<h2>Step Details</h2>

<cfoutput>
<dl>

<cfloop index="i" from="1" to="#ArrayLen(stepArray)#">

    <dt>#stepArray[i]#</dt>
    <dd>#stepsXML['steps'][stepArray[i]]['documentation']['XMLText']#</dd>
    <cfif len(stepsXML['steps'][stepArray[i]]['requires']['XMLText']) gt 0>

        <dd class="requires">Requires: #stepsXML['steps'][stepArray[i]]['requires']['XMLText']#</dd>

    </cfif>

</cfloop>

</dl>
</cfoutput>

</cfsavecontent>

<cffile
    action="write"
    file="#outputfile#"
    output="#stepReference#" />

Then, to call it through ANT, I just add this task to my ANT build file:

<get
    src="${project.url}/tools/stepDocumenter/"
    dest="${project.logPath}/tools/stepDocumenter.html" />

I then include these dynamic steps in a static document that adds some background on what steps are and whatnot.


Squidhead Updates and New Features

It’s been a little while since I’ve talked about new features of Squidhead, so I figured I would take the opportunity to blow my own horn.

New features:

  • Oracle Support
  • Linux Support
  • Linking table support
  • FKCrazy Application template
  • Rudimentary ColdSpring Support

Squidhead will run against Oracle 10g. It alters its model a bit to create stored procedures within packages, as this seemed to be in keeping with Oracle Best Practices.

Squidhead will run on Ubuntu. I haven't tried other flavors of Linux yet, but I can't see why it wouldn't work on them.

If you follow the convention of naming tables [table1]To[table2] and have the primary keys from table1 and table2 as foreign keys, Squidhead will recognize the table as a linking table. It will create stored procedures that pass queries through this linking table to create relationships between linked tables.
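As a rough illustration, a linking table following that convention might look like this; the entry and comment tables are hypothetical here, borrowed from the usp_entryToComment example mentioned later in this archive:

```sql
-- A linking table following the [table1]To[table2] naming convention.
-- Squidhead would detect entryToComment as linking entry and comment.
CREATE TABLE entryToComment (
    entryID   INT NOT NULL REFERENCES entry(entryID),
    commentID INT NOT NULL REFERENCES comment(commentID),
    PRIMARY KEY (entryID, commentID)
);
```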

The FKCrazy application template uses the linking table information to automatically create an interface that uses all of the foreign key relationships to add and delete child records. It can do this for a one-to-many relationship or a many-to-many relationship. Translation: if you've seen my Facebook in 17 minutes presentation, everything I did manually at the end now happens automatically. So if I did it today, it would be more like Facebook in 12 minutes.

Squidhead can create its own ColdSpring configuration files. Not earth shattering but makes integrating a Squidhead application with an application already in ColdSpring much easier.

So there you go, quite the update over the past few weeks.

MSSql vs MySQL vs Oracle, Stored Procedures, and Code Generation

Over the past two weeks I’ve been struggling with adding support for Oracle in Squidhead. (Expect a release early next week.) Oracle was more different from both Microsoft SQL and MySQL than they were from each other. It got me thinking about the three and their various pros and cons for code generation, and their pros and cons for me doing code generation with stored procedures. I figured it could make a good blog article. (Or someone could see my post and say “Hey you didn’t have to do all that because of the blah command!”)

Microsoft SQL

This was the first one I started with, so the others are being compared to it. It comes with the built-in stored procedure sp_help, which is absolutely brilliant: it gave me more information about tables than I ever needed, and most of the information I need about a stored procedure. There are a few things lacking, though, which made me resort to sp_helptext and then parse the actual stored procedure source. I had to do this to get the number of result sets that a stored procedure returns (because if a stored procedure returns more than one recordset, Squidhead writes the method to return a structure of queries instead of just a query), and to get the metadata of the arguments being passed to a stored procedure.
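For reference, the introspection calls look like this. sp_help and sp_helptext are real MSSQL system procedures; the object names passed to them here are made up for illustration:

```sql
-- Table metadata: columns, keys, indexes, constraints
EXEC sp_help 'entry';

-- Raw stored procedure source, which Squidhead then parses
-- to count result sets and read argument metadata
EXEC sp_helptext 'usp_entry_read';
```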

MySQL

Of the three, this was the easiest to do. I had a flash one night ("just implement sp_help's queries in MySQL") and it took me less than 24 hours to add. Nonetheless, I still had to query a lot of system tables directly to get the results I wanted.

There are a few other limitations that bug me, but they aren't related to code generation: things like stored procedures not being able to take default values for their inputs, and views not being able to contain subqueries. I'm sure as those features mature, we'll see better results there.

Oracle

Finally, we reach Oracle. It took me a week of hair pulling, keyboard slamming and muttered curses to get it right. But finally I did.

What I didn’t like:

One has to create a cursor to contain the output recordsets of a stored procedure. That's so different from the other systems that I wanted to kill someone. Although, I must confess, it makes counting output recordsets much easier.

Everything is limited to a name length of 30 characters, so I had to get creative with stored procedures named things like usp_entryToComment_list_foreign_key_labels. The solution I came up with was to abbreviate them and rename them higher up in the Squidhead stack so that the functions have the "correct" names.

What I liked:

Packages. Packages are sort of like classes or CFCs for Oracle operations. They encapsulate code into discrete chunks. I basically create a package for every table, so my stored procs have names like ENTRYPKG.READ, which to my mind is better than usp_entry_read, since that prefix was just a way to group and sort stored procedures anyway.
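A minimal sketch of that per-table layout, including the cursor for output recordsets mentioned above; ENTRYPKG.READ is from the post, but the parameter names and body details are assumptions:

```sql
-- Hypothetical per-table package; body details are illustrative.
CREATE OR REPLACE PACKAGE ENTRYPKG AS
    TYPE refcur IS REF CURSOR;
    PROCEDURE READ (p_entryID IN NUMBER, p_result OUT refcur);
END ENTRYPKG;
/

CREATE OR REPLACE PACKAGE BODY ENTRYPKG AS
    PROCEDURE READ (p_entryID IN NUMBER, p_result OUT refcur) IS
    BEGIN
        -- The output recordset has to come back through a cursor,
        -- which at least makes counting output recordsets easy.
        OPEN p_result FOR
            SELECT * FROM entry WHERE entryID = p_entryID;
    END READ;
END ENTRYPKG;
/
```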

The sheer amount of data in the various data dictionaries. I don’t have to parse any code strings to get any metadata. That’s much easier.

"CREATE OR REPLACE." Truly awesome. It cuts down on the number of steps I have to do to recreate a stored proc or package. I wish it existed for MySQL.

Conclusion

It's kind of funny, and it may be cognitive dissonance talking, but of the three I really like Oracle the best for code creation and metadata. Overall, I still lean towards Microsoft SQL, as it's the one I know best. Also, whatever your thoughts on Microsoft SQL Server Management Studio, it's so much better than Oracle's SQL Developer. But that's a completely different topic.