Sunday, December 21, 2014

Debugging ScalaTest tests in NetBeans with the Maven ScalaTest Plugin

In the spirit of writing a blog post whenever I figure out something important that I don't want to lose or forget, here I go.

A month or two ago, we moved all of our Java tests at Apprenda to run using the Maven ScalaTest plugin. We have been using ScalaTest from day one; however, for a while we were using Surefire + @RunWith + JUnitRunner annotations to kick off the tests. Since IntelliJ is quite happy to run ScalaTest tests without any of that (i.e. without the @RunWith and JUnitRunner annotations), we would sometimes end up in a situation where tests created during development in the IDE ran fine there, but were silently skipped when the build was run using plain Maven.
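
Roughly, the difference looks like this (the suite and test names are made up for illustration):

   import org.junit.runner.RunWith
   import org.scalatest.FunSuite
   import org.scalatest.junit.JUnitRunner

   // Surefire picks this suite up, because the JUnit runner annotation makes it look like a JUnit test
   @RunWith(classOf[JUnitRunner])
   class AnnotatedSpec extends FunSuite {
     test("addition works") { assert(1 + 1 == 2) }
   }

   // A plain ScalaTest suite like this one runs fine from IntelliJ,
   // but Surefire silently skips it during the Maven build
   class PlainSpec extends FunSuite {
     test("subtraction works") { assert(2 - 1 == 1) }
   }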

So, as described in the ScalaTest documentation on working w/ the plugin, you first disable the Surefire plugin and then add the ScalaTest Maven plugin to run the tests. That all worked great, and now all of our tests run, regardless of whether we remembered to add the @RunWith annotation.

That presented a problem for me: when we were running the tests with Surefire, NetBeans was more than happy to run and debug them. However, when we switched to the Maven ScalaTest plugin - no such luck. What to do? I'm totally fine with not seeing the graphical test runner results that NetBeans shows w/ JUnit tests; however, it becomes a problem when I can't run or debug individual tests from the IDE. The tests run just fine when you "Test" the whole project (e.g. right click on the project and choose "Test" from the context menu), but if you try to run an individual test (e.g. right click on a test case and choose "Test" from the context menu), or go to the Run -> Test File menu while inside the test case, you're out of luck - nothing shows up.

So, getting a single test to run from NetBeans turns out to be pretty simple:
1. In the Project Properties -> Actions panel, choose the "Test File" action and change the properties from "test=${packageClassName}" to "suites=${packageClassName}". As indicated in the ScalaTest documentation, the way to tell ScalaTest which test to run is the suites system property, e.g.:





So, that's great, but how do you debug the test? Looking at the ScalaTest documentation, I was a bit confused about how to do this: there were a few options related to debugging, but it wasn't entirely clear which ones to set in order to get this working from NetBeans.

It turns out the fix was fairly simple:
1. Since by default NetBeans runs surefire:test to run individual tests, the goal that is run needed to be switched to just "test" - this allows the Maven ScalaTest plugin to run (as it binds to the test phase and does nothing for surefire:test).
2. I had to add a few additional properties that ScalaTest needs in order to run the right test, turn on debugging, and connect the NetBeans debugger to the forked ScalaTest process.

# This tells ScalaTest to run the suite with this class
suites=${packageClassName}

#These tell ScalaTest to start the new process with debugging enabled
logForkedProcessCommand=true
debugForkedProcess=true
debugArgLine=-Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=${jpda.address}

# This tells NetBeans to fire up the debugger and wait for a connection
jpda.listen=true

Here's an example of what it looks like :



Tada - enjoy being able to run and debug individual ScalaTest tests right from NetBeans !




Friday, March 07, 2014

Intro to Programming : Drawing a House

What is this 

To help everyone along in getting started with drawing the Minecraft Creeper, I asked my son to put together a program that draws a house. It uses a lot of the same elements that the students need for the Creeper, and shows a few new things that we didn't cover in class yet.

The Code

   
  clear()
  println("setAnimationDelay() sets how fast the turtle speed is, invisible makes the turtle invisible but the line is still there")
  println("penUp() make the line disapear but the turtle doesn't, setFillColor(Color(0, 0 ,0)) and setPencolor(Color(0, 0, 0,))  changes the color")
  println("repeat(){ } makes the program repeat as many time as you want")
  setAnimationDelay(100)
  invisible()
  penUp()
  left()
  forward(150)
  setFillColor(Color(255, 0, 51))
  setPenColor(Color(255, 0, 51))
  penDown()
  right()
  //makes the square for the house, note the usage of 'repeat'
  repeat(4){
    forward(150)
    right()
  }
  forward(150)
  //this makes the roof
  setFillColor(Color(51, 51, 0))
  setPenColor(Color(51, 51, 0))
  right(60)
  forward(90)
  right(60)
  forward(90)
  penUp()
  //this makes the windows
  setFillColor(Color(51, 153, 255))
  setPenColor(Color(51, 153, 255))
  right(60)
  forward(40)
  right()
  forward(20)
  penDown()
  forward(30)
  right()
  forward(30)
  right()
  forward(30)
  right()
  right()
  penUp()
  forward(80)
  penDown()
  forward(30)
  left()
  forward(30)
  left()
  forward(30)
  left()
  forward(30)
  left()
  penUp()
  left()
  forward(80)
  penDown()
  //this makes the door
  setPenColor(Color(153, 68, 0))
  setFillColor(Color(153, 68, 0))
  forward(60)
  left()
  forward(35)
  left()
  forward(60)
  left()

The Outcome

The finished product looks something like this 


Intro to Programming : The First Class

.. And so it begins... 


This spring, I am teaching an "Intro to Programming" class at the After School Enrichment program at Craig Elementary, so this blog will get some new content as the class progresses.

The First Class


In the first class, we covered quite a lot of ground: we learned a bit about each other and what the students in the class are interested in. As expected, most kids do enjoy video games, like learning how things work, and just plain enjoy working with computers. We briefly chatted about why people program in the first place: to express themselves, to tackle new challenges, and to harness the power of computers for their own inventions.

We covered some of the basic concepts of what a program is : a sequence of instructions that follow some specific grammar rules (just like they do in English). We had a quick introduction to Kojo (our programming environment, more on that later) and the various elements of Kojo's user interface - the command / script area, the drawing area, the menus, the output / error message area. Most importantly, we focused on how to continue learning from the environment - using code completion / prompts to see the syntax of commands that we hadn't used before, reading the command documentation. We also spent some time learning to recover from failure, when something unexpected happens - resetting the zoom level on the drawing area (if one accidentally zoomed out too far), and just bringing up the right windows if one of them suddenly disappeared.

Our first programming exercise really didn't involve a computer: I asked my students to "program me" and command me from one side of the room to the other, while avoiding the obstacles along the way - chairs, large metal poles, backpacks. I did my best to behave like a computer - I just followed the instructions that were given to me and failed a few times along the way: ran into an office chair, gonked my head against the metal pole. To tie in with what came next (drawing a square in our programming environment), I then asked the kids to use the same commands they had used to guide me through the room, but this time to make me walk in a square along the tiled floor, using the same limited set of commands - e.g. left, right, forward, back.


So, with the basic commands under our belt, the kids got to programming. We learned how to draw a square. First everyone came up with a short program for a square, and then we experimented with variations on the topic - some of the kids really enjoyed drawing BIG squares and seeing the turtle in action while it was drawing the squares. Then, for the more ambitious students, we tried drawing a triangle (and yes, this is where the geometry skills came shining through - what were the angles in an equilateral triangle, you ask :-) ). The example looks something like this :
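
Something along these lines (the size and color are arbitrary, and the triangle is the stretch goal):

   clear()
   setPenColor(Color(0, 0, 255))
   // the square: four equal sides, four 90-degree turns
   repeat(4) {
     forward(100)
     right()
   }
   // the triangle: the turtle turns through the exterior angle, which is 120 degrees
   repeat(3) {
     forward(100)
     right(120)
   }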





The takeaway: for the kids who want a little challenge at home, I asked them to practice drawing a Minecraft Creeper, which, very roughly, looks like this - it will give the kids a chance to practice some of the skills they've already picked up (e.g. drawing squares), but also a chance to learn a few new things (using background and fill colors).




Setting up Kojo

Kojo is a very kid-friendly piece of software which was created specifically to help children learn about programming and have fun along the way. It should run on all modern OSes (Windows, Linux, MacOS), so as long as you have a desktop computer to install it on (it will not work on a tablet), your child should be able to continue exploring it on their own. It contains a rich library of examples written in Kojo (which we will lean on in the class) - it even has a game or two that the kids can play with.

Setting up Kojo is pretty straightforward:
1. Download Kojo from http://www.kogics.net/kojo-download
2. Run the downloaded .jar file (e.g. by right clicking and choosing "Run" on Windows); it will launch the installer - just follow through the installer and you should end up with an icon on your desktop that you can use to launch the software.




Monday, January 16, 2012

Tynamo committer, new Tynamo module: tapestry-jdo

The Announcement: 
First of all, the announcement: as a new committer to Tynamo, I have just released a new module, Tapestry-JDO, which allows Tapestry users to build applications based on JDO using the familiar (from the Hibernate and JPA integrations) Tapestry idioms for persistence manager injection and transaction management (using @CommitAfter). The official announcement is on the Tynamo Blog.

The History
How that came about...

I've been a Tapestry user for a few years, usually lurking on the mailing lists and answering a question here and there, wherever I could contribute. I've always enjoyed Tapestry's approach to building web apps: pages, components, and annotation-driven magic make for a pleasant experience, and the out-of-the-box live class reloading keeps it productive. At first I somewhat loathed the heavy emphasis on annotations to drive the framework magic; however, having dealt w/ the Grails way of doing it for a few years, it now seems preferable for being more clear and explicit.

Fast forward a few years: Google announced the App Engine (GAE), and I was totally blown away by how they turned the world of Java web app hosting on its head. Prior to GAE, Java web app hosting was both expensive and inconvenient - who would want to build publicly facing Java apps? No wonder PHP ruled the roost. Of the two persistence APIs on the App Engine, JDO was the more familiar one, as I had spent a few years on the job working w/ JDO. Prior to GAE's choice of JDO as a persistence API, JDO (at least in my world) seemed like a dead end - Hibernate and JPA had taken the world by storm, and nobody seemed to care about JDO any longer. Another reason why JDO seemed interesting on the App Engine is that from its very inception JDO claimed that it was not necessarily married to relational databases. While back in the day that seemed to me like a not-so-interesting design goal, when GAE chose it as one of the options it made much more sense, as the backend store of GAE is non-relational. Today, with the rise of NoSQL, it really starts clicking (e.g. DataNucleus MongoDB support) - all of a sudden, the same skillset that you developed in the relational ORM days can be applied in non-relational contexts - sweet!

So, along the way I built a couple of apps to run on the App Engine - a site for a math problem solving assistance service and an artwork portfolio site. Neither of the apps was particularly complicated or difficult to build from a development point of view, but both had just enough interaction w/ the back end that I had to deal w/ JDO directly - e.g. getting persistence managers, starting sessions, committing, all that. As much as I find Spring to be a useful tool in the toolbox, booting up a full-blown Spring container is just too much for a simple web app. Additionally, the existence of tapestry-ioc makes Spring redundant and way too verbose for many of its possible usages within a simple web app. On the other hand, without Spring, manual transaction management is a royal PITA, and I was looking for a better way of doing it. My first stab was to set up a Spring dependency (w/o a Spring container) just to get the transaction manager classes and wire them up in Tapestry IoC. It worked pretty decently, but the integration was just a kludge, not to mention the few megabytes that the four Spring classes I used dragged in with them.
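
For the curious, the manual version of that dance looks roughly like this (a sketch only - the PMF name "transactions-optional" is what GAE's default jdoconfig.xml uses, and in a real app the factory would be created once and shared):

   import javax.jdo.{JDOHelper, PersistenceManagerFactory}

   object ManualJdo {
     val pmf: PersistenceManagerFactory =
       JDOHelper.getPersistenceManagerFactory("transactions-optional")

     // get a persistence manager, open a transaction, save, commit, clean up
     def save[T](obj: T): T = {
       val pm = pmf.getPersistenceManager()
       val tx = pm.currentTransaction()
       try {
         tx.begin()
         val saved = pm.makePersistent(obj)
         tx.commit()
         saved
       } finally {
         if (tx.isActive) tx.rollback()
         pm.close()
       }
     }
   }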

Having been on the Tapestry mailing lists, I've continued to be impressed with the suite of modules that the Tynamo project keeps releasing to fill needs in the Tapestry community - e.g. JPA, security, routing, OpenID authentication, and more. Thus, when I started wondering where to look for an example of how to build the JDO support, Tynamo was the first candidate. So, I grabbed Tynamo-JPA, moved some things around, replaced all the usages of the JPA EntityManager w/ JDO's PersistenceManager, touched every piece of code in the module, and before you know it I had a working basic setup that allowed me to inject a PersistenceManager and use a @CommitAfter annotation to get automatic transaction management. With that basic implementation I was able to finish the project that I was working on, and everything looked great.
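
Using the module from a page class boils down to roughly this (the page and entity names are made up, and the package for @CommitAfter below is my guess at the layout, mirroring the JPA module - check the module docs for the real one; the rest is the standard Tapestry and JDO API):

   import javax.jdo.PersistenceManager
   import org.apache.tapestry5.annotations.Property
   import org.apache.tapestry5.ioc.annotations.Inject
   // assumption: the @CommitAfter annotation lives in a tynamo jdo annotations package
   import org.tynamo.jdo.annotations.CommitAfter

   class SaveArtPiecePage {

     @Inject
     private var pm: PersistenceManager = _

     @Property
     private var piece: ArtPiece = _

     // the transaction is committed automatically when the event handler returns
     @CommitAfter
     def onSuccess() {
       pm.makePersistent(piece)
     }
   }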

Then came the question of what to do w/ my changes. It seemed that something like this might be a valuable contribution to the Tapestry community, yet I didn't quite feel like setting up a project somewhere. So I figured, since I heavily borrowed from Tynamo in the implementation, why not ask them if they would be interested in having the module be a part of their project? On first thought it seemed like a long shot - my scrappy JDO module couldn't possibly fit within the pure awesomeness of Tynamo. Yet, I figured, it's worth a shot. Shortly after I asked if they wanted it, they said yes; I zipped up the source and sent it out. I didn't necessarily plan on becoming a committer on the project, but before I knew it, Kalle had added me to the list, given me access to SVN, and posted an announcement on the blog about a new member of the project. I was a bit surprised by the warm welcome I received - I certainly felt that my contribution was appreciated.

To make a long story short: a few months, a major code and javadoc cleanup, fixing the tests, and stepping through the Tynamo / Codehaus release process later, the cat is out of the bag - the module is released and the announcements are out. I hope this new module is useful to someone.

Enjoy ! 

Thursday, December 01, 2011

Ad-Hockery functional testing : UI Elements and Selenium experience

This post is a response to Rob's post on functional testing that just grew too long to leave as a comment...

-----------
At my job, we've used the UI elements approach pretty extensively and at this time we're moving away from it on a new project.

Conceptually, I've also enjoyed the idea that the location logic sits in the same tier as the elements that are being located. The support in S-IDE is also quite nice and impressive (e.g. code completion, documentation), and the built-in support for unit testing the locators is a nice touch.

Now, the downsides....

  • UI Elements development / future: although it is technically a part of Selenium, in practice UI Elements support doesn't actually evolve in lockstep with it. On a conference call/presentation w/ one of Selenium's maintainers a few months ago, when I asked about UI Elements, he indicated that they might be moved out of the core (he didn't seem to be particularly fond of them; I almost had the feeling that he was tempted to just drop the feature). There isn't much of a community using them in the first place, so if something breaks it can remain broken for a while until someone notices (when that happened to us, the turnaround was pretty quick, although I'm not sure if that is the norm or if it was because the author of UI Elements for Selenium was a former employee).

  • The skills gap: the fact that the UI Element definitions are written outside the main "test case authoring" environment (which in our case is Groovy) makes them another distinct skill that team members (often QA folks) need to acquire. When it comes to Javascript, being a user is easy; being an author who can sling Javascript and write nifty unit tests (typically not a QA skillset) inside of the UI Elements is not that easy. At least in our case it led to a bunch of not very well maintained UI Elements, w/ most of the unit tests commented out and the knowledge of writing UI Elements within the QA team considered almost a Dark Art.

  • Development cadence issues: although using UI Elements in a test case seems very sexy, the "development cadence" for the UI Elements themselves is kinda slow (this could be my own ignorance as well). Basically, if you wanted to enhance a UI Element, you would write some Javascript to set up the UI Element definitions and then close S-IDE and open it up again. If your unit test failed, you would get a bunch of popups about the failing tests, the UI Elements would be disabled, and you're back to editing the Javascript file until the unit test passes. Only after that can you try using the UI Element in S-IDE or in your test case. Finally, if you were using selenium-server to run your "real" test cases (like we did), you need to restart Selenium Server (RC) so that it can pick up the updated UI Elements.

  • UI Element organization: this could be an artifact of our own failings w/ UI Elements and not necessarily a problem w/ the framework itself; however, the lack of a well-defined approach to structuring them has led us to a massive pageset that contains UI Elements for a whole bunch of different pages. Thus, when you try to pick a UI Element there isn't a good organizing principle ("Oh, yeah, I'm on the Login page, lemme see what I can locate there") - instead we have something along the lines of "I'm in feature 'foo', let me see which one of the several hundred UI Elements in the fooFeaturePageSet might fit".





Although the Page Object approach is not a silver bullet, it addresses a bunch of the issues above (a rough sketch follows the list):
  • Skillset - the same skills that test authors use to write test cases, they use to write the abstraction (page objects)
  • A clear organizing principle - people get it, pages have stuff in them, putting the location logic for an element on a page inside of the corresponding page object makes sense. Then, using that is just a method call
  • Deployment / cadence issues - there is no longer the impedance mismatch of "I want to run this test case, did I re-deploy the UI elements that it depends on?". You just run the test; the testing framework recompiles the modified page objects and the test case runs - no need to do anything w/ the Selenium server or anything else.
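
For illustration, a bare-bones page object might look like the following. This sketch is in Scala against the WebDriver API and all the names are made up - our real test code is Groovy, but the shape is the same:

   import org.openqa.selenium.{By, WebDriver}

   // hypothetical page object for a login page; the location logic lives here and nowhere else
   class LoginPage(driver: WebDriver) {

     private def username = driver.findElement(By.id("username"))
     private def password = driver.findElement(By.id("password"))
     private def loginButton = driver.findElement(By.id("loginButton"))

     // the test case just calls a method - no locator knowledge required
     def loginAs(user: String, pass: String): DashboardPage = {
       username.sendKeys(user)
       password.sendKeys(pass)
       loginButton.click()
       new DashboardPage(driver)
     }
   }

   class DashboardPage(driver: WebDriver) {
     def greeting: String = driver.findElement(By.id("greeting")).getText
   }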
Anyway, just my 2c on the subject.

Wednesday, November 30, 2011

My Scala (and Tapestry 5) experience

Since a couple of coworkers asked me about Scala in the last few weeks, I thought these might be interesting to read through (something that emerged over the last few days):

  •  http://codahale.com/downloads/email-to-donald.txt
  •  http://codahale.com/the-rest-of-the-story/


I have obviously not used Scala to the extent that the author describes, but I can testify to how annoying the constant translation between Java and Scala is (especially since my small app was using Tapestry 5, which is obviously a Java framework). The conversion between the two was made worse (at least for me, a total Scala newb) by the following:

  • Classes with the same names but completely or subtly different usages and intents. The Scala library uses some of the same class names as the Java library (java.lang.Long and scala.Long), the default imports pull them in, and there are some magical conversions between the eponymous types. On a few occasions I was totally baffled about something not working, only to find that in the end the code was getting the wrong type. Thus, in a bunch of my classes, I ended up having to explicitly import the Java classes that the framework expected to work with, e.g.:
      
         import java.util.{List => JList}
         import org.slf4j.Logger
         import scala.collection.JavaConversions._
         import org.apache.tapestry5.annotations.Property

         class FooPage {

           @Property
           private var catPieces: JList[ArtPiece] = _

           // explicitly declaring the return type as the Java type
           def onPassivate(): JList[String] = {
             // and using the Scala-provided conversions
             return seqAsJavaList(List(category, subCategory))
           }

         }
    
    and then explicitly use the Java-specific types. A similar but different situation exists with java.util.List and the Scala List class (scala.collection.immutable.List): although they have the same name, they have a completely different purpose (the Scala List is not intended to be created and manipulated like the Java list); the closer equivalent of the Java list is the recommended Scala ListBuffer (scala.collection.mutable.ListBuffer).
  • Null handling - because I was interacting w/ a Java framework, there was an expectation that nulls are OK, and in various places the framework expects methods to return null in order to behave in certain ways. Scala goes for the whole Option pattern (where you aren't supposed to use nulls at all, to make everything better) and has some conversions (that I obviously don't fully understand) between null and these types. However, because of the interaction w/ the Java framework, I had to learn how to deal with both. It kinda sucked.
  • Tapestry 5 and Scala interactions - because Tapestry 5 pushes the envelope on being a Java framework w/ a whole bunch of annotation processing, class transformations, etc., in some cases there were clashes between the T5 approach and Scala. In some respects, Tapestry 5 manages to be a respectable and succinct Java framework by adding a whole bunch of metaprogramming features, which, when used with Scala, make the Scala code less attractive, e.g.:
    • Page properties that would otherwise be set up as private fields in regular Tapestry 5 now have to be declared as private fields and initialized. If you didn't declare them as private, then T5 would complain (since pages can't have non-private members, as they are managed by T5), e.g.:
        
        class Foo {

          @Inject
          var logger: Logger = _

          @Inject
          var pm: PersistenceManager = _
        }
      
    • Sometimes the T5 and Scala approaches seemed to clash in ways that made things complicated. For example, in the persistent object classes I often annotated the private fields w/ @BeanProperty (so that Scala generates proper getters/setters for those fields):
        
        import scala.reflect.BeanProperty
        import javax.jdo.annotations.Persistent

        class PersistentFoo {

          @BeanProperty
          @Persistent
          var title = ""
        }
      
      Yet, when I accidentally did the same for some page properties, the application would start failing at weird points (on application reload with Tapestry's live class reloading), until I replaced the approach in pages w/ Tapestry's @Property annotation (although the two are supposed to do the same thing, it's quirky w/ @BeanProperty):
        
        import org.apache.tapestry5.annotations.Property

        class FooPage {

          @Property
          private var category: String = _
        }
      


When I was working on the app, a few times I had to just stop for a day because I couldn't figure out how to do something massively simple (e.g. how to succinctly join a list of Strings into a comma-separated string - stuff that would have taken me 30 seconds to do in Java and 2 seconds in Groovy, while the proposed Scala solutions seemed like massive overkill). I originally started out wanting to write some tests in Scala for this app, because I thought, "wouldn't it be nice to have something a little more flexible and less verbose than Java, but that still has nice static typing". Later I decided to try the whole Scala+T5 approach, and I have to admit I was pretty mad at myself whenever I got stuck.

Obviously, many of my problems described above were due to my own weak Scala-fu (I had read through at least 2-3 books in order to be brave enough to try this, only to learn that until I try things hands-on, it doesn't stick too well), and other issues were due to the interaction w/ the specific Java framework that I chose (Tapestry 5). Yet, in some ways, the experience was somewhat disappointing - having worked w/ Groovy for the last few years, there is a massive difference in the approaches of the two languages. Groovy will often sacrifice some "internal beauty" in order to make a Java developer's life sweet and pleasant, e.g.:

  • Joining a list of strings
     
       [1,2,3].join(",");
    
  • string formatting using $ inside of strings
     
       "Blah blah $fooVar"
    
  • Null safe dereference
     
       foo?.bar
    
... whereas Scala somehow gets stuck in an ideological mode, e.g.:
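
Roughly, the Scala counterparts of those three snippets at the time (string interpolation only arrived later, in Scala 2.10; foo and fooVar are just placeholders):

       case class Bar(name: String)
       case class Foo(bar: Bar)
       val foo: Foo = null
       val fooVar = 42

       // joining a list of strings
       List("1", "2", "3").mkString(",")

       // string formatting - no interpolation in pre-2.10 Scala, so it's format() or concatenation
       "Blah blah %s".format(fooVar)

       // a "null safe" dereference means a detour through Option
       Option(foo).map(_.bar).orNull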


One part of my setup that worked very well and that I enjoyed quite a bit was the combination of continuous compilation and Tapestry's live class reloading. Whereas for prior pure-Java Tapestry projects I had to rely on IDE magic to do compile-on-save so that Tapestry could reload the changed classes, w/ the Scala setup it was much nicer. I set up a Maven project w/ the Scala Maven plugin and then kicked off the scala:cc goal to continuously compile the changed page classes into my project. Thus, I had a completely IDE-independent setup that gave me a live-reloading experience on par with (and possibly beyond) the reloading experience in Grails.

In the end, after I managed to work through some of the issues described above, it turned out to be a pretty reasonable setup and I was able to make pretty decent progress in getting the app out the door (for my wife's birthday). At the same time, I wasn't really able to leverage any cool Scala features that would magically boost my productivity or make the codebase significantly cleaner or smaller (in some respects, it feels like the Scala-based code is more verbose because of all the conversions and casting into Java types). I feel that if I knew more about Scala and were more knowledgeable about Tapestry internals, I might be able to write a Tapestry 5 - Scala adapter layer that would plug into some of T5's extension points to make Scala work more naturally with Tapestry (e.g. working w/ Scala lists in views, different handling of null values, etc.). As a learning experience - I learned a lot, both about things that were interesting and useful (a bit of functional programming, Java/Scala integration), and about some things that I really didn't want to know that much about (how Scala and T5 munge the Java classes to make things tick).

In any event, my advice to people who would like to try this kind of integration is to allow yourself plenty of time for learning and experimentation w/ Scala, and not to give up too early (as I was almost ready to do on a few occasions). Fanf's blog has a few entries and a project on GitHub that are an excellent starting point.


Wednesday, June 15, 2011

Grails, Web Flows, and redirecting out

If you read the Grails Web Flow documentation it all seems pretty straightforward - start and end states, transitions, actions, events, it's all good. However, just like any other technology that seems like magic, there is always some kind of a catch once you start using it.

One of the little 'gotchas' that I ran into was how to properly complete the flow. Now, reading the documentation, it would seem easy - at the end of your wizard/flow, you just redirect to a different controller+action and it's all good. It all makes sense - often, the wizard walks through multiple steps, collects some information, and when it's all done (you save your brand new Foo), you can just redirect to the details page for Foo (e.g. /foo/show/1).

Well, you'd think it would be that easy. Not so fast...

The Grails Web Flow documentation is kinda deceptive like that. It shows you a simplistic example that works; however, when you try to do something more realistic, you start getting into trouble. So, the example from the docs looks like this:

  def fooFlow = {
       startState {
            // ... transitions that eventually lead to "endState"
       }

       endState {
             redirect(controller: 'foo', action: 'bar')
       }
  }

The catch is that in their examples, the redirect is to a static URL that doesn't take any parameters. The problem comes up when, in the end state, you try to pass in some parameter from the flow - which is often what you want, e.g. at the end of the flow you want to display the details page, so you do something like redirect(controller:'foo', action:'bar', id:flow.fooId). The problem manifests itself in a weird way: the web flow stores a particular value of the flow property (e.g. flow.fooId) under conditions that I couldn't figure out, and even though your current wizard might have stored a particular value in the current flow, for whatever reason it ends up redirecting to a value stored from a previous flow. So the wizard 'kinda' worked, in that it redirected to a details page at the end of the wizard, but a large percentage of the time it would redirect to the wrong details page. From what I could gather, the issue is that in the end state the redirect cannot use any values from the flow, session, or flash, and as a result it uses some cached value (possibly from the first flow execution).

The solution to this (which is somewhere on the Grails mailing lists) is as follows: add an explicit, empty "end" state (including an empty GSP matching the end state name), and issue the redirect from the penultimate state that transitions into it, e.g.:

   def fooFlow = {
       startState {
            // ... transitions that eventually lead to "beforeEnd"
       }

       beforeEnd {
             action {
                  redirect(controller: 'foo', action: 'bar', id: flow.fooId)
             }
             on("success").to "end"
       }

       /** note that this requires an end.gsp to be present
           in the flow subdirectory, but it never gets rendered after the
           redirect **/
       end()
   }

Now, with this trick at hand, the end.gsp never gets rendered, and the client browsers do get redirected to the detail pages that you want to display, outside of your web flow.

As a more Web Flow centric alternative, you could always store the relevant object (Foo) inside the flow and display any relevant details about the object in the end state (end.gsp)

Sunday, April 03, 2011

NetBeans Database Explorer API and databases in NetBeans

At the recent NetBeans Platform Certified Training organized by Visitrend, we were discussing how to work w/ the built-in database functionality. NetBeans ships w/ pretty decent database/SQL functionality out of the box - you can connect to any JDBC-compliant database, add your drivers, sling queries, edit the results. It even has code completion for SQL queries - up until SQL Server 2005 it provided better support for SQL code authoring than the built-in management tools (and it is still better than the standard MySQL console).


Now, it turns out that for data-driven applications, it is a pretty common occurrence that the users need to connect to a database. Sometimes it makes sense to hide the details of the database; at the same time, when you're dealing with sophisticated users who have intimate knowledge of the underlying database schema and need to be able to work with the underlying data, hiding the fact that they're dealing with a database just doesn't make sense. In our case, we have a team of QA Engineers who need to be able to look into all aspects of the database behind the application, so the best a tool can do is to make the setup and access to the database as easy and transparent as possible.

Thus, to solve this problem, my users need the following :
1. Make sure that the IDE has the proper drivers set up to access our test databases
2. Easy setup of the database connection with the details for a specific project/system under test

Automatically setting up JDBC driver

Unfortunately, Microsoft's SQL Server driver is not one of the JDBC drivers that ship with the IDE. A new user could just navigate to the "Drivers" node in the "Services" top component and walk through the wizard to register a new driver, but that is certainly an extra step in the list of "setup instructions". Why should a user have to remember to do that if we can do it in a module? Thus, the first hurdle we need to overcome is to have a module that automatically registers the JDBC driver in the IDE:

1. First, we need to  provide the MSSQL JDBC driver.

For that, I created a new Library Wrapper module. I wanted to mention this because for whatever reason when I tried providing the JDBC driver and the XML registration below in the same module, it failed to find the JDBC driver.


In the general case, for a pure JDBC driver, this should be enough. However, in order to support Windows authentication, the JDBC driver needs to have a DLL available on the path. In order to support jars that need native libraries, the native library needs to be placed in release/modules/lib, as indicated in the Modules API Javadoc.






2. Create a second module for the actual driver registration.

Add an XML descriptor for registering new drivers (named SQLServer2008Driver.xml). For MSSQL it looks like this :



<?xml version='1.0'?>
<!DOCTYPE driver PUBLIC '-//NetBeans//DTD JDBC Driver 1.1//EN' 'http://www.netbeans.org/dtds/jdbc-driver-1_1.dtd'>
<driver>
  <name value='SQLServer2008'/>
  <display-name value='Microsoft SQL Server 2008'/>
  <class value='com.microsoft.sqlserver.jdbc.SQLServerDriver'/>
  <urls>
      <url value="nbinst:/modules/ext/sqljdbc4.jar"/>
  </urls>
</driver>




I am not entirely sure of the meaning of the nbinst: prefix, but this works.

3. Add an entry into the layer.xml file of the second module to register the driver XML, under the standard Databases/JDBCDrivers folder:

    <folder name="Databases">
        <folder name="JDBCDrivers">
            <file name="SQLServer2008Driver.xml" url="SQLServer2008Driver.xml"/>
        </folder>
    </folder>




Unfortunately, in the case of the MSSQL JDBC driver, just adding the DLL to the module doesn't cut it - it appears that the SQL Server driver also depends on other DLLs, so the authentication DLL actually needs to be in c:\windows\system32. Thus, adding the DLL to the module's release/modules/lib would not have been needed, as the DLL has to be copied into the windows\system32 directory instead. To do that, register a module installer:


import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.openide.filesystems.FileUtil;
import org.openide.modules.ModuleInstall;

public class MssqlDriverInstaller extends ModuleInstall {

    @Override
    public void restored() {
        File mssqlJdbcDll = new File(System.getenv("windir"), "system32\\sqljdbc_auth.dll");
        boolean foundDll = mssqlJdbcDll.exists();

        if (!foundDll) {
            FileOutputStream system32MssqlDll = null;
            InputStream bundledDll = null;
            try {
                system32MssqlDll = new FileOutputStream(mssqlJdbcDll);
                bundledDll = MssqlDriverInstaller.class.getResourceAsStream("sqljdbc_auth.dll");
                System.out.println("Copying sqljdbc_auth.dll to windows system32");
                FileUtil.copy(bundledDll, system32MssqlDll);

            } catch (IOException ex) {
                Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
            } finally {
                if (system32MssqlDll != null) {
                    try {
                        system32MssqlDll.close();
                    } catch (IOException ex) {
                        Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
                    }
                }
                if (bundledDll != null) {
                    try {
                        bundledDll.close();
                    } catch (IOException ex) {
                        Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
                    }
                }
            }
        }
    }
}


Automatically connecting to the database


Now, the last step: how do we automate the creation of the database connection node, now that we can be sure that the IDE has the right driver to connect to the database? (The APIs used below are from the Database Explorer API, org.netbeans.api.db.explorer.)


public void createProjectConnection(DatabaseConfig dbc) {

        DatabaseConfig dbConfig = dbc;
        if (dbConfig == null) {
            dbConfig = DatabaseConfig.getDefault();
        }

        JDBCDriver sqlSrvDrv = findSqlServerDriver();

        if (sqlSrvDrv != null) {
            try {
                DatabaseConnection dbConn = createDbConnection(dbConfig, sqlSrvDrv);

                final ConnectionManager connMgr = ConnectionManager.getDefault();
                DatabaseConnection foundConn = findSameConnection(dbConn);
                if (foundConn == null) {
                    foundConn = dbConn;
                    connMgr.addConnection(dbConn);
                }

                final DatabaseConnection dbConn2 = foundConn;
                RequestProcessor.getDefault().post(new Runnable() {

                    @Override
                    public void run() {
                        try {
                            connMgr.connect(dbConn2);
                            connMgr.selectConnectionInExplorer(dbConn2);
                        } catch (DatabaseException ex) {
                            Logger.getLogger(Ats3ProjectDataService.class.getName()).log(Level.SEVERE, "Failed to connect to database", ex);
                        }
                    }
                });

            } catch (DatabaseException ex) {
                Logger.getLogger(Ats3ProjectDataService.class.getName()).log(Level.SEVERE, "Failed to connect to database", ex);
            }
        }
    }

    private DatabaseConnection createDbConnection(DatabaseConfig dbConfig, JDBCDriver sqlSrvDrv) {
        DatabaseConnection dbConn;
        String url = null;
        String userDb = dbConfig.getName();
        if (userDb != null) {
            if (userDb.contains("${username}")) {
                userDb = userDb.replace("${username}", System.getProperty("user.name"));
            }
        } else {
            userDb = System.getProperty("user.name") + "_sb_rc";
        }
        if (!dbConfig.getUseSqlAuth()) {
            url = String.format("jdbc:sqlserver://%s:1433;databaseName=%s;integratedSecurity=true", dbConfig.getServer(), userDb);
            dbConn = DatabaseConnection.create(sqlSrvDrv, url, "", "dbo", "", true);
        } else {
            url = String.format("jdbc:sqlserver://%s:1433;databaseName=%s", dbConfig.getServer(), userDb);
            dbConn = DatabaseConnection.create(sqlSrvDrv, url, dbConfig.getUser(), "dbo", dbConfig.getPassword(), true);
        }
        return dbConn;
    }

    private JDBCDriver findSqlServerDriver() {
        JDBCDriver sqlSrvDrv = null;
        JDBCDriver[] drivers = JDBCDriverManager.getDefault().getDrivers("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        // we know that there should be at least one as this module registers it
        for (JDBCDriver drv : drivers) {
            if ("SQLServer2008".equals(drv.getName())) {
                sqlSrvDrv = drv;
            }
        }
        return sqlSrvDrv;
    }

    private DatabaseConnection findSameConnection(DatabaseConnection dbConn) {
        DatabaseConnection foundConn = null;
        ConnectionManager connMgr = ConnectionManager.getDefault();
        for (DatabaseConnection dbc : connMgr.getConnections()) {
            if (dbc.getDatabaseURL().equals(dbConn.getDatabaseURL())) {
                foundConn = dbc;
            }
        }

        return foundConn;
    }