Monday, May 7, 2012

Using implicit parameters for Dependency Injection in Scala

A few days ago I made a post about some issues I have with the cake pattern for dependency injection. I want to illustrate a means of doing DI using Scala's implicit parameters which retains the benefits of the cake pattern but is more concise. The main benefit I see with the cake pattern is its type safety: if we do something wrong the compiler will tell us, so we don't have to wait until we actually deploy to find out whether our configuration is correct.

In order to understand how to do DI with implicit parameters we need to understand how implicit parameters are resolved in Scala. In essence, suppose we define a method with an implicit parameter:
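A sketch of the kind of definition meant (the names are illustrative, not from the original post):

```scala
// A method with an implicit parameter list.
def greet(implicit name: String): String = "Hello, " + name

// With no implicit String in scope, calling greet is a compile error:
// greet  // error: could not find implicit value for parameter name: String

// After bringing an implicit val into scope, the compiler fills it in:
implicit val defaultName: String = "World"
println(greet)
```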
When the compiler finds a call to a method with an implicit parameter, it searches for an implicit val/var/def/object that provides an instance of a matching type. In this case I defined a method that takes a String as an implicit parameter. When I tried to call it without having an implicit value of type String in scope, I got an error. Then I specified an implicit val, and calling the method now prints the value of that val. We can, however, override this behaviour by providing the method with an argument explicitly, such as:
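Continuing the same kind of sketch (again with illustrative names), an explicit argument wins over the implicit:

```scala
def greet(implicit name: String): String = "Hello, " + name
implicit val defaultName: String = "World"

println(greet)           // resolved implicitly from defaultName
println(greet("Scala"))  // explicit argument overrides the implicit
```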
For more information on how implicit parameters work and are resolved in Scala there are some excellent short blog posts on Daily Scala about it:
  1. http://daily-scala.blogspot.se/2010/04/companion-object-implicits.html
  2. http://daily-scala.blogspot.se/2010/04/implicit-parameters.html
  3. http://daily-scala.blogspot.se/2010/04/implicit-parameter-resolution.html

From the last post, the resolution rules for implicits are summarized as:
There are very strict rules for which implicit value is to be applied to an implicit parameter. A simple way to think about it is that the "closest" definition will be used. Local scope, enclosing class, parent class, companion object of the desired type.
With this knowledge in mind, let's explore how we can use implicit parameters for dependency injection. For comparison I will show the same service/dao setup as in my previous post.
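A sketch of that setup using an implicit constructor parameter (FooDao and FooServiceImpl are the names used in the text; the members are assumptions):

```scala
trait FooDao {
  def findFoo(id: Int): String
}

class FooDaoImpl extends FooDao {
  def findFoo(id: Int): String = "foo" + id
}

// The dependency is declared as an implicit constructor parameter.
class FooServiceImpl(implicit val fooDao: FooDao) {
  def fooById(id: Int): String = fooDao.findFoo(id)
}
```

When testing, the implicit parameter can simply be supplied explicitly with a mock or stub dao.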
Since an implicit parameter can be provided explicitly, it is trivial to mock the FooDao when testing the FooServiceImpl; I think this is perhaps the main advantage of using DI. For comparison, using this means of dependency injection the code is succinct and not polluted with a lot of boilerplate (12 LoC compared to 25 LoC using the cake pattern). One issue still remains: how do we ensure that FooDao is injected into the FooServiceImpl?
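A sketch of central wiring in a Conf trait (names follow the text; the FooDao/FooServiceImpl definitions are repeated so the sketch stands alone):

```scala
trait FooDao { def findFoo(id: Int): String }
class FooDaoImpl extends FooDao { def findFoo(id: Int): String = "foo" + id }
class FooServiceImpl(implicit val fooDao: FooDao) {
  def fooById(id: Int): String = fooDao.findFoo(id)
}

trait Conf {
  implicit val fooDao: FooDao = new FooDaoImpl
  val fooService: FooServiceImpl = new FooServiceImpl  // fooDao is resolved implicitly
}

// Approach one: a single configuration object the application pulls instances from.
object AppConf extends Conf

// Approach two: mixing Conf into each class that needs the dependencies
// (every such class then carries its own Conf instance).
class SomeEndpoint extends Conf {
  def handle(id: Int): String = fooService.fooById(id)
}
```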
The main difference between the two approaches above is that if we extend Conf in our classes in order to get the dependencies we need, then we will have several Conf instances around, each containing its own fooService and fooDao. So I think the former approach is preferable for this reason.

Another thing to notice is that in the Conf trait I have not explicitly provided FooServiceImpl with a FooDao when it is instantiated. Instead the compiler resolves it for us using the locally defined fooDao val. Using this pattern we avoid having to spread a bunch of implicit vals/defs/vars/objects throughout our code: we have one central place where the configuration of the app is performed. If we make a mistake, such as defining two implicits of the same type in our Conf so that the compiler doesn't know which one to pick, we get a compiler error. Sweet!

But, wait. What if we have a circular dependency? First, try to avoid circular dependencies. Second, we could resolve it by passing a factory function to the class:
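A sketch of the factory-function idea (the class and member names are illustrative, not from the original post):

```scala
// Instead of injecting the instance itself, inject a function that produces it;
// lazy vals defer the lookup until first use, breaking the cycle.
class FooServiceImpl(implicit barFactory: () => BarService) {
  lazy val barService: BarService = barFactory()
  def foo: String = "foo"
}

class BarService(implicit fooFactory: () => FooServiceImpl) {
  lazy val fooService: FooServiceImpl = fooFactory()
  def bar: String = "bar, and " + fooService.foo
}
```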
Then in our configuration do something like:
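A self-contained sketch of what that configuration might look like (the factory-taking classes are repeated for completeness; all names are illustrative):

```scala
class FooServiceImpl(implicit barFactory: () => BarService) {
  lazy val barService: BarService = barFactory()
  def foo: String = "foo"
}

class BarService(implicit fooFactory: () => FooServiceImpl) {
  lazy val fooService: FooServiceImpl = fooFactory()
  def bar: String = "bar, and " + fooService.foo
}

object Conf {
  // the implicit factories close over the lazy vals below, tying the knot
  implicit val barFactory: () => BarService = () => barService
  implicit val fooFactory: () => FooServiceImpl = () => fooService
  lazy val fooService: FooServiceImpl = new FooServiceImpl
  lazy val barService: BarService = new BarService
}
```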
Jonas Bonér has already described this pattern in his blog post about different approaches to dependency injection. However, he states something I do not quite get:
This approach is simple and straight-forward. But I don’t really like that the actual wiring (importing the implicit declarations) is scattered and tangled with the application code.
This is obviously true if we choose to import the implicit definitions everywhere that we instantiate the classes. However, by using Conf as a factory for our dependencies we avoid having to import the implicits everywhere when we want a new instance of a certain "top level" class.

Sunday, May 6, 2012

Problems with the Scala Cake Pattern - aka. Too much cake

A popular way to do dependency injection in Scala is to use the Cake pattern (see Jonas Bonér's excellent blog post about it). This pattern involves specifying components using traits and specifying dependencies by using self-types. My main problem with the cake pattern is that it is very verbose. To illustrate, let's look at how we can use dependency injection in Java using Spring:
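A sketch of roughly what the annotation-driven Spring version might have looked like (the annotations and names are assumptions; in practice each type would live in its own file):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

interface FooDao {
    String findFoo();
}

@Repository
class FooDaoImpl implements FooDao {
    public String findFoo() { return "foo"; }
}

interface FooService {
    String foo();
}

@Service
class FooServiceImpl implements FooService {
    @Autowired
    private FooDao fooDao;

    public String foo() { return fooDao.findFoo(); }
}
```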
This is 20 lines of code for a FooService that depends on a FooDao. To get this to work we would also need a few lines of XML configuration to enable component scanning etc.

If we instead use the cake pattern we get the following.
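A sketch of the cake-pattern equivalent (the component names are assumptions based on the text):

```scala
trait FooDaoComponent {
  val fooDao: FooDao
  trait FooDao {
    def findFoo: String
  }
  class FooDaoImpl extends FooDao {
    def findFoo: String = "foo"
  }
}

trait FooServiceComponent { this: FooDaoComponent =>  // self-type declares the dependency
  val fooService: FooService
  trait FooService {
    def foo: String
  }
  class FooServiceImpl extends FooService {
    def foo: String = fooDao.findFoo
  }
}

// the wiring: one object mixes in all components and provides the vals
object Registry extends FooServiceComponent with FooDaoComponent {
  val fooDao = new FooDaoImpl
  val fooService = new FooServiceImpl
}
```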

This version is 25 lines of code. Sure, a separate XML file for the wiring is no longer necessary; we can simply wire the components up directly in code. But the problem with the verbosity still remains: the Java sample consists of 20 lines whereas the Scala code is 25. I believe most developers would agree that Java is a very verbose language, yet despite this it is still more concise than the Scala equivalent! That should ring an alarm bell. If we instead of Java used Scala in the annotation-driven way, we could shrink those 20 lines even further. This might be acceptable if the proportions in LoC shifted as more components were added, but they do not: as we add more and more components, the necessary boilerplate further increases the difference!

Of course LoC is not the whole story; we should also ask ourselves about the readability of the code. I have been using Spring quite a bit at work, so my opinion is obviously biased, and I have not written a lot of code using the cake pattern, but I think the Java/Spring version is clearer. Also, you would likely end up having to specify all the vals and methods as protected in case you do not want to expose all the parts of the final component.

To summarize, the Cake pattern makes our code verbose and (to me) hard to read due to the necessary boilerplate. I also see some other minor problems with the pattern, but my main concern is that it adds a lot of noise to the code base.

Should we just ditch the cake pattern and revert to using a DI framework like Spring? No. At least, I do not think so. Spring brings with it a bunch of fantastic features, but DI in Scala can be solved in a clean manner using some other language features. I will make a post about it next.

Monday, July 26, 2010

Implementing Type Safe Roman Numerals in Scala (type parameter extravaganza!)

I figured I wanted to do some more stuff with the roman numerals and try to create a type safe way to count with them (integers are so 2000, we're living in the MMX now ;-)). Anyway, what I wanted was something that could be used to express roman numerals in source files, where the compiler would say no if I expressed them in a flawed manner. It should be possible to write something like MM I (2001), but it should not be possible to write I MM (since that is not a proper roman numeral).

I started out with a class in Scala:
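A sketch of the starting point (the exact members are assumptions):

```scala
// A roman numeral wraps an integer value.
class RomanNumeral(val value: Int) {
  override def toString: String = "RomanNumeral(" + value + ")"
}
```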

Then using some traits I could add the ability to express simple combinations of roman numerals:
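A sketch of the trait idea: each numeral value gets a trait whose methods are exactly the numerals allowed to follow it (only a few traits shown; the full set follows the same pattern):

```scala
class RomanNumeral(val value: Int)

trait Numeral { self: RomanNumeral => }

// After an I, only another I may follow.
trait NumeralI extends Numeral { self: RomanNumeral =>
  def I: RomanNumeral with NumeralI = new RomanNumeral(value + 1) with NumeralI
}

// A V may be followed by anything an I may be followed by (method I is inherited).
trait NumeralV extends NumeralI { self: RomanNumeral => }
```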

After importing the RomanNumeral class and the traits, along with the contents of the RomanNumerals object, I was able to write M CD, since Scala interprets that as M.CD, and since M (defined in the RomanNumerals object) extends the NumeralM trait it has a method CD. It would not be possible to write I M, since I only extends the trait NumeralI, which does not define any M method. This is all well and good, but I wanted more expressive power. I wanted to be able to write something along the lines of M CM X I (i.e. 1911), but that is not possible because of the way Scala interprets spaces: M CM X I would be turned into M.CM(X).I. That means my RomanNumeral would need an apply method that could take another RomanNumeral and return a new one. So I defined one:
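A sketch of such an apply method, so that M CM X parses as M.CM(X):

```scala
class RomanNumeral(val value: Int) {
  // Combine with a following numeral; M CM X becomes M.CM(X).
  def apply(other: RomanNumeral): RomanNumeral =
    new RomanNumeral(value + other.value)
}
```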

Now at least I was able to write M CM X (since that is M.CM(X)). However, I was also able to write M CM M (i.e. M.CM(M)), which would be an illegal roman numeral (since M CM is 1900, it doesn't make sense to have an M after it; L, X, V and I would all be valid, but not M or C). In addition, I was not able to write M CM X I, since the RomanNumeral returned from the apply method had no NumeralN mixed in when instantiated (using with). I would be able to write M.CM.X.I, since that indicates to the compiler that I am calling a method on the return value of the previous method (i.e. calling the method X on the return value of the method CM, and so on), but I wanted to do this without the dots :).

So first things first: how could I know in the apply method which trait I should mix in with the returned RomanNumeral so that it would have the appropriate methods available? Enter the type parameters. I first tried redefining the RomanNumeral class as follows:
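A sketch of the attempted redefinition (as explained below, this version does not compile):

```scala
class RomanNumeral[T <: Numeral](val value: Int) {
  def apply[N >: T <: Numeral](other: RomanNumeral[N]): RomanNumeral[N] =
    new RomanNumeral[N](value + other.value) with N  // error: N is not a concrete trait
}
```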

However, that does not work. Why, you ask? Well, first some explanation of the type parameters introduced. The type parameter for RomanNumeral is specified as T <: Numeral; the <: Numeral part is an upper bound on the type that can be used for the class. So when instantiating a RomanNumeral I have to pass in Numeral or a subtype thereof. In the apply method I specify an upper and a lower bound for the type parameter of the RomanNumeral passed in as an argument to the method. The lower bound is T and the upper bound is Numeral, so the type used as the type parameter of the RomanNumeral being passed to the method has to be a supertype of T and a subtype of Numeral (and T in turn has to be a subtype of Numeral). A few examples in the interpreter might be helpful here (let's ignore the with N part of the code above).
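The bounds in action, with the with N part left out so the sketch compiles (the trait hierarchy is as described in the text: NumeralV is a subtype of NumeralI):

```scala
trait Numeral
trait NumeralI extends Numeral
trait NumeralV extends NumeralI

class RomanNumeral[T <: Numeral](val value: Int) {
  def apply[N >: T <: Numeral](other: RomanNumeral[N]): RomanNumeral[N] =
    new RomanNumeral[N](value + other.value)
}

val three = new RomanNumeral[NumeralI](3)
val five  = new RomanNumeral[NumeralV](5)

five(three)     // compiles: NumeralI is a supertype of NumeralV and a subtype of Numeral
// three(five)  // type mismatch: NumeralV is not a supertype of NumeralI
```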

Since NumeralI is a superclass of NumeralV and a subclass of Numeral, the "five" value can be supplied with the "three" value to its apply method. Doing the reverse causes a type mismatch, since NumeralV is not a superclass of NumeralI. The apply method would look like this in the first instance (with the type parameters spelled out):
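A sketch of the instantiated signature for that call:

```scala
// N instantiated to NumeralI; the bounds NumeralV <: NumeralI <: Numeral hold
def apply(other: RomanNumeral[NumeralI]): RomanNumeral[NumeralI]
```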

and in the latter case:
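A sketch of the instantiated signature for the failing call:

```scala
// N would have to be NumeralV, but NumeralI <: NumeralV does not hold,
// so the lower bound is violated
def apply(other: RomanNumeral[NumeralV]): RomanNumeral[NumeralV]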

All this is OK. So why doesn't the RomanNumeral code above compile? Well, because of type erasure, for one, and also because N could be a concrete class and not a trait (only traits can be mixed in to classes at instantiation time). After type erasure there is no way of knowing what N is, and therefore the RomanNumeral class above does not compile. At this point I tried a lot of different versions of the above to find something that did work. In the end I realized that I could have one apply method per trait, each taking the corresponding NumeralN. That way I would know which type should be mixed in with the returned RomanNumeral.
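A sketch of the one-apply-per-trait idea (two overloads shown; value is exposed on Numeral here so the overloads can read it, which is an assumption):

```scala
trait Numeral { def value: Int }
trait NumeralI extends Numeral
trait NumeralV extends NumeralI

class RomanNumeral(val value: Int) extends Numeral {
  // One overload per trait, so the result gets the right trait mixed in.
  def apply(other: NumeralI): RomanNumeral with NumeralI =
    new RomanNumeral(value + other.value) with NumeralI
  def apply(other: NumeralV): RomanNumeral with NumeralV =
    new RomanNumeral(value + other.value) with NumeralV
}
```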

This worked fine, but allowed any Numeral to follow any other. Using the interpreter to illustrate:
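A sketch of the kind of session meant (types per the one-apply-per-trait design just described):

```scala
scala> val I = new RomanNumeral(1) with NumeralI
scala> val V = new RomanNumeral(5) with NumeralV

scala> I(V)   // compiles, although a V must never follow an I
res0: RomanNumeral with NumeralV = ...
```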

As can be seen above, a NumeralI could be followed by a NumeralV, which should not be allowed. By changing NumeralI and its subtypes to take a type parameter I was able to add the type bounds back to the apply methods and enforce the correct behaviour. The traits now look like this:
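A sketch of the parameterized traits (the hierarchy follows the text, with each trait extending the trait of the next-smaller numeral):

```scala
trait Numeral
trait NumeralI[T <: Numeral] extends Numeral
trait NumeralV[T <: Numeral] extends NumeralI[T]
trait NumeralX[T <: Numeral] extends NumeralV[T]
// ... and so on, up to NumeralM
```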

And the RomanNumeral class looks like:
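A simplified, self-contained sketch of the idea: the lower bound on apply rejects a follower that is a "larger" numeral (the post's final version additionally parameterized the traits so the result keeps its numeral methods):

```scala
trait Numeral { def value: Int }
trait NumeralI extends Numeral
trait NumeralV extends NumeralI   // larger numerals are subtypes of smaller ones

class RomanNumeral[T <: Numeral](val value: Int) extends Numeral {
  // N must be a supertype of T: a numeral may only be followed by an
  // equal-or-smaller one.
  def apply[N >: T <: Numeral](other: RomanNumeral[N] with N): RomanNumeral[N] =
    new RomanNumeral[N](value + other.value)
}
```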

With these modifications I could now specify the values in the RomanNumerals object as follows:
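A sketch of the values, shown with the simplified traits and repeating the definitions so the sketch stands alone (the full version has one val per numeral):

```scala
trait Numeral { def value: Int }
trait NumeralI extends Numeral
trait NumeralV extends NumeralI
trait NumeralX extends NumeralV

class RomanNumeral[T <: Numeral](val value: Int) extends Numeral {
  def apply[N >: T <: Numeral](other: RomanNumeral[N] with N): RomanNumeral[N] =
    new RomanNumeral[N](value + other.value)
}

object RomanNumerals {
  val I = new RomanNumeral[NumeralI](1) with NumeralI
  val V = new RomanNumeral[NumeralV](5) with NumeralV
  val X = new RomanNumeral[NumeralX](10) with NumeralX
  // ... L, C, D and M defined the same way
}
```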

By importing all the classes and traits and then importing the contents of the RomanNumerals object (import RomanNumerals._) I can now write my roman numerals in a type safe way. I.e. M C X V I (which is equivalent to writing M.C.apply(X).V.apply(I)) is allowed, but M CM C is not. Why? Well, since the return value of the CM method is a RomanNumeral with the type parameter NumeralL[Numeral] (and with NumeralL[Numeral] mixed in), the apply methods all look like:
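A sketch of the effective shape of those methods:

```scala
// with T = NumeralL[Numeral], every apply requires N >: NumeralL[Numeral]
def apply[N >: NumeralL[Numeral] <: Numeral](other: RomanNumeral[N]): RomanNumeral[N]
```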

and since C has the type RomanNumeral[NumeralC[Numeral]] with NumeralC[Numeral], there is no apply method where C is a valid argument (C is a NumeralC[Numeral], which is a subtype of NumeralL[Numeral] and as such violates the lower bound on the type parameter).

Next I created a rollover so that when a RomanNumeral is created that is larger than 3999 it rolls over to a number that can be described with roman numerals (i.e. > 0 and < 4000). Then, to wrap things up, I added a companion object with apply/unapply methods to my RomanNumeral so it can easily be used in pattern matching. I also added some methods for basic arithmetic (+, -, *, /), implemented the Ordered trait and extended the Number class. Finally I added an implicit conversion from Number to RomanNumeral[Numeral] so that I could add Integers and Doubles and such to a RomanNumeral and vice versa (with a result type of RomanNumeral[Numeral]).

The final result can be seen below (now I only need to find a problem for my solution :-) ):

Wednesday, July 21, 2010

Roman Numerals Kata in Scala

I got back to work on Monday this week after roughly 12 months of parental leave (Sweden has some awesome benefits). Since the office is really quiet and I am sitting idle with no assignment, I figured I would spend my time doing some katas in Scala. At the last Scala meetup in Gothenburg (which I unfortunately missed) they did the Roman Numerals kata, so I figured I'd implement that to start. I have decided to go back to basics with my coding after tiring of Eclipse going really sluggish for the umpteenth time, and as such I have installed emacs once again. There is a basic scala-mode for emacs that is distributed with the Scala release, and there is also a complement to it named Ensime that is under development and adds some nice features, such as code completion, sbt integration, inspection of types, etc. So after installing emacs, Scala, Ensime and SBT (Simple Build Tool) I started on the kata.

I usually like to start out writing tests for my code, and Specs provides a pretty nice DSL for that. Here are the tests for my ArabicRomanConverter:
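A sketch, from memory, of what such Specs tests might look like; the exact matchers and cases of the original are assumptions:

```scala
import org.specs._

object ArabicRomanConverterSpec extends Specification {
  "An ArabicRomanConverter" should {
    "reject numbers outside the roman range" in {
      ArabicRomanConverter(0) must throwA[IllegalArgumentException]
      ArabicRomanConverter(4000) must throwA[IllegalArgumentException]
    }
    "convert the simple cases" in {
      ArabicRomanConverter(1) mustEqual "I"
      ArabicRomanConverter(4) mustEqual "IV"
      ArabicRomanConverter(9) mustEqual "IX"
      ArabicRomanConverter(1000) mustEqual "M"
    }
    "convert a few randomly chosen numbers" in {
      ArabicRomanConverter(1911) mustEqual "MCMXI"
      ArabicRomanConverter(2010) mustEqual "MMX"
    }
  }
}
```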

All in all, pretty simple tests for the border cases and the different simple cases, along with a few randomly chosen numbers. Onward to the implementation code!
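Below is a reconstruction (not the original listing) with the same moving parts: an apply doing the sanity check, a listOfPairs, and a @tailrec innerLoop driven by a foldLeft. The original also threaded a counter through the fold, which this sketch omits:

```scala
import scala.annotation.tailrec

object ArabicRomanConverter {
  // value/symbol pairs, largest first, including the subtractive forms
  private val listOfPairs = List(
    1000 -> "M", 900 -> "CM", 500 -> "D", 400 -> "CD",
    100  -> "C", 90  -> "XC", 50  -> "L", 40  -> "XL",
    10   -> "X", 9   -> "IX", 5   -> "V", 4   -> "IV", 1 -> "I")

  def apply(num: Int): String = {
    require(num > 0 && num < 4000, "only 1..3999 can be written as a roman numeral")
    convert(num)
  }

  private def convert(num: Int): String = {
    // For each pair, subtract its value from the remainder as many times as
    // possible, appending the symbol each time.
    val (result, _) = listOfPairs.foldLeft((new StringBuilder, num)) {
      case ((sb, rest), pair) => innerLoop(sb, rest, pair)
    }
    result.toString
  }

  @tailrec
  private def innerLoop(sb: StringBuilder, rest: Int,
                        pair: (Int, String)): (StringBuilder, Int) =
    if (rest < pair._1) (sb, rest)
    else innerLoop(sb.append(pair._2), rest - pair._1, pair)
}
```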

Here the apply function takes care of my input sanity checks, verifying that the number is actually convertible to roman numerals (at least to roman numerals of the magnitude I am interested in). The listOfPairs is used in the convert method below.

In the convert function I have defined a recursive function called innerLoop and annotated it with @tailrec. The annotation tells the compiler to optimize the function to a loop and gives an error if the function is not tail recursive. The innerLoop is tail recursive because the last operation in the function is either a call to itself or the return value of the function as a whole.

So what does the innerLoop actually do? Well, it starts out by checking whether we are done and should return the StringBuilder-num-counter tuple, by verifying that num, count and the int in the rPair tuple are all greater than 0 (basically just a precaution). It also checks whether it is "safe" to subtract the integer in the rPair tuple from num. Then, if count is greater than num, count is decremented by the integer in the rPair tuple; otherwise num is decremented and the String in the rPair tuple is appended to the StringBuilder. In both cases we make a recursive call to innerLoop and try again. The innerLoop function gets called from within the foldLeft over the listOfPairs. So basically, for each romanNumPair we attempt to subtract the integer in the pair from num (in the innerLoop) as many times as possible (while still keeping num greater than or equal to 0). The StringBuilder is then extracted from the result of the foldLeft and converted to a String.

Having completed the kata I started to ponder whether there isn't a better way. The roman numerals aren't actually Strings but numbers, and it would be cool if you could add, subtract, multiply and divide them as such. So I have decided to check whether there is an easy and clean way to do this in Scala... more on that another time, perhaps.

Tuesday, April 27, 2010

A-star Scala implementation...

Working on my game I decided to do a generic version of the A* graph search algorithm. Below are the results. Basically, pass in your start and goal nodes, a neighbor function, a heuristic function (which needs to be not only admissible but also consistent), e.g. straight-line distance, and a cost function between two nodes (the A* implementation will only pass neighboring nodes as arguments to the cost function).
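A sketch of a generic A* along the lines described (the exact signatures are assumptions, not the original code): it takes the start and goal nodes, a neighbor function, a consistent heuristic and a cost function, and keeps mutable structures for the fringe and closed nodes:

```scala
import scala.collection.mutable

object AStar {
  def search[N](start: N, goal: N,
                neighbors: N => Seq[N],
                heuristic: N => Double,
                cost: (N, N) => Double): Option[List[N]] = {
    val g = mutable.Map(start -> 0.0)        // best known cost from start
    val cameFrom = mutable.Map.empty[N, N]   // for reconstructing the path
    val closed = mutable.Set.empty[N]
    // the fringe is ordered by f = g + h, lowest first
    val fringe = mutable.PriorityQueue(start -> heuristic(start))(
      Ordering.by[(N, Double), Double](_._2).reverse)

    while (fringe.nonEmpty) {
      val (current, _) = fringe.dequeue()
      if (current == goal) {
        // walk the cameFrom chain back to the start node
        var path = List(current)
        while (path.head != start) path = cameFrom(path.head) :: path
        return Some(path)
      }
      if (!closed(current)) {
        closed += current
        for (n <- neighbors(current) if !closed(n)) {
          val tentative = g(current) + cost(current, n)
          if (tentative < g.getOrElse(n, Double.PositiveInfinity)) {
            g(n) = tentative
            cameFrom(n) = current
            fringe.enqueue(n -> (tentative + heuristic(n)))
          }
        }
      }
    }
    None
  }
}
```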

Not sure if the internal use of a mutable set to keep track of the fringe and closed nodes is optimal...

Emil H

Saturday, April 10, 2010

I made a floor (more Scala and JMonkeyEngine)!

I've played around with Scala and JMonkeyEngine some more. It is always a surprise when you've been working on something and then get an epiphany ("Oh, I shouldn't do it like this..."), change the way you go about the problem, and are basically done in five minutes. Well, ideally that is... This time, that is what coding my Floor/Game Board implementation felt like.

I got the tip to use SharedMesh on the JME forum. This felt like a good fit for my floor, since each Tile would be exactly the same. Initially I had created each tile with its own vertices, colors, indices and normals arrays. With SharedMesh, all Tiles share the data in these arrays. Nifty :)

So here I use the singleton object Tile as the TriMesh for the Tile class. I think this is a nice fit: there will only ever be a single instance of the TriMesh used to create the floor, and if I later need to change some property of the floor I can just change the values in the Tile object. So when creating a new Tile I just use the apply method of the Tile object and pass in a name and the x and z offsets. This is done in the Floor object below. The Floor class is basically just a Node at the moment, but I expect it will grow when I start building the actual game :) The magic happens yet again in the companion object.

I have to say I really like foldLeft; I have missed it since I stopped coding in Erlang. Everything becomes nice and clean with it: you fold over an Iterable, passing in some state along with a function (that returns a new state) to apply for each element. No mutables that change and cause strange bugs. Sweet. Oh, and another nice feature in Scala is pattern matching. I use it in the calculatePos method. This case is a very basic example of pattern matching (I could have used a simple if-else-if-else instead) and it can be much more powerful.
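A small self-contained illustration of those two features (not the actual JME code; the hex-grid offsets are made up): foldLeft threads the accumulated positions through, and a hypothetical calculatePos pattern-matches on the row parity:

```scala
// Odd rows in a hex grid are shifted half a tile sideways.
def calculatePos(row: Int, col: Int): (Float, Float) = (row % 2) match {
  case 0 => (col.toFloat, row * 0.75f)   // even rows align with the grid
  case _ => (col + 0.5f, row * 0.75f)    // odd rows are offset half a tile
}

// Build all positions for a small 2x3 board by folding over the coordinates.
val positions = (for (r <- 0 until 2; c <- 0 until 3) yield (r, c))
  .foldLeft(List.empty[(Float, Float)]) {
    case (acc, (r, c)) => calculatePos(r, c) :: acc
  }.reverse
```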

And here is the result:

Next step will be to create a figure to move about my new floor, and some pathfinding in a hex grid (I think JME has an A* implementation, so I'll have a look at that). Later I'll add some basic controls to the game.

Emil H

Thursday, April 8, 2010

Scala and JMonkeyEngine 2

I've started a small game development project for funsies. Getting JME2 to work properly on my Linux box was a bit of a hassle initially, since I wanted to have it work with Maven. After a bit of tinkering and following the slightly dated tutorial (most of it is still correct as of March 2010) I got everything running smoothly with a Hello3D sample in Java. However, since I wanted to use Scala 2.8, I realized I had to update Netbeans to 6.8 and get the new Scala plugin for it. So to get going a little faster I skipped the Maven part for now. Setup is always easier the second time around, I have noticed.

With my environment set up I created a Scala project, included the lwjgl, jinput and jme-2.0 jars, and added the native bindings to the VM options. And I was ready to go!

So I quickly ran into a problem that is due to (I believe) the Scala Netbeans plugin: an import error, "jme is not a member of com". It seems the issue is that the plugin had not realized that I had included the libraries. When I restarted Netbeans, the following generated no errors:
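A sketch of the kind of Hello3D sample meant (the JME2 API details are from memory and may not match the original exactly):

```scala
import com.jme._
import com.jme.math.Vector3f
import com.jme.scene.shape.Box

object Hello3D extends app.SimpleGame {
  override def simpleInitGame() {
    // a 2x2x2 box at the origin, attached to the scene graph
    val box = new Box("my box", new Vector3f(0, 0, 0), 2, 2, 2)
    rootNode.attachChild(box)
  }

  def main(args: Array[String]) {
    start()
  }
}
```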

A few nice features of Scala can be seen here. First, importing using a wildcard, i.e. somepackage._. In the case above I imported com.jme._, and because of that I don't have to write the entire package name for everything I import from a subpackage of jme. Instead I can write, for instance, app.SimpleGame to refer to that class in com.jme.app.SimpleGame. Not the greatest thing in the world, but a good thing to know when looking at Scala code, and it does clean up the import statements.

The object keyword in Scala defines a "singleton object". What this means is that there will only ever be a single instance of that object. It's kind of like a Java class with only static methods. So why is this useful? Well, for starters, Scala doesn't allow static methods, so objects are needed to create the entry point for a program. But there is also something called companion objects (I'll talk about them another time), and that is when objects shine :)
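A small self-contained illustration (not from the post): an object is one shared instance, and an object with a main method serves as the program's entry point:

```scala
// One shared instance: every caller sees the same Counter.
object Counter {
  private var count = 0
  def next(): Int = { count += 1; count }
}

// The entry point lives in an object, since Scala has no static methods.
object Main {
  def main(args: Array[String]): Unit = {
    println(Counter.next())
    println(Counter.next())
  }
}
```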

The rest of the code is very similar to Java code. However, we can notice that the val "box" has no declared type. "box" still has a type, of course, which is inferred by the Scala compiler from the assignment to it. This cuts down on a bunch of boilerplate. Oh, and a val is a variable that can only be assigned once (like a variable marked with the final keyword in Java). There is also var, which is mutable.
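A tiny illustration of val, var and type inference (not from the post):

```scala
val box = "a Box"   // the type String is inferred; box cannot be reassigned
var count = 0       // var is mutable
count += 1
// box = "other"    // would not compile: box is a val
```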

I expect to see more Scala power once I get things rolling, for instance a DSL for the core game rules would be nice. I'll have to come back to that...

- Emil H