Replicate traffic from production

For most of our services we have custom JMeter-based performance tests that try to mimic production usage of a given service. JMeter has built-in support for generating random values; you can also add a CSV file with values copied straight from production, add custom delays, and so on. This is nice, but it is in fact only an approximation. There is nothing like production ;-). Testing in production still sounds a bit scary, but one of the techniques named among Testing in Production (TiP) patterns is traffic replication.

This can be approached from several different angles. If you have something like Varnish or HAProxy in front of your service, you can most likely configure this quite easily. Another way is to log requests in your application and then replay them using simple curl calls (or something similar), but this requires additional work.
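
For illustration, here is a minimal sketch of the log-and-replay approach (the log format and target host below are assumptions made up for this example):

# assumes the application logs one request per line as: METHOD PATH BODY
while read -r method path body; do
  if [ "$method" = "POST" ]; then
    curl -s -X POST -d "$body" "http://test01-example.com:8080$path" > /dev/null
  else
    curl -s "http://test01-example.com:8080$path" > /dev/null
  fi
done < requests.log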

After looking at different possibilities I came across Gor. Gor gives you something a bit different from the two use cases mentioned above; in general, it can give you a flavour of continuous performance testing. So, long story short – what can Gor do for you?

First, you need to install Gor on one of your production machines.

wget https://github.com/buger/gor/releases/download/v0.10.1/gor_0.10.1_x64.tar.gz
tar -xvf gor_0.10.1_x64.tar.gz

Gor is written in Go, so after executing this you will end up with a single, self-contained executable file ;-). As a warm-up, let’s record some requests into a file.

sudo ./gor --input-raw :8080 --output-file requests.gor --http-allow-method POST

As you can see in the example above, you can limit recorded requests to POSTs only (the complete list of selectors can be found in Gor’s documentation). You can also rewrite recorded requests in all sorts of ways using your own custom scripts (see the sketch below). Nice. Time for the first downside of Gor: there is no stop button, so in order to stop recording you need to shut down the Gor process (correct me if I am wrong).
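
The sketch: recording can be narrowed further by combining selectors. This assumes your Gor version also supports the --http-allow-url regexp filter (check gor --help for your release):

sudo ./gor --input-raw :8080 --output-file requests.gor \
    --http-allow-method POST --http-allow-url "/api/"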

In order to replicate traffic, you need to cast the following spell:

./gor --input-file requests.gor --output-http "http://test01-example.com:8080"

Gor has some nice additions which allow you to use this traffic for performance tests; you can specify a multiplier which will increase (or decrease) the number of requests.

./gor --output-http-stats --input-file "requests.gor|200%" --output-http "http://test01-example.com:8080"

If you want to set up constant traffic replication from your production machine into staging, you can run Gor in the following way:

sudo gor --input-raw :8080 --output-http "http://test01-example.com:8080"

It is worth mentioning that this is the original use case of Gor and the reason why it was created (which explains why it does not have a STOP button).

You can now sit back, look at the dashboards of your test instance and compare response times, error rates, etc. with production. In practice this may of course be a bit more complicated, but it is a perfect fit for read-only services, which are not that rare these days.

As for downsides:

  ‱ Both traffic recording and replication add load to your production machine; keep this in mind. In general I would prefer setting up traffic replication using an external proxy, but in my current use case this setup fits perfectly
  ‱ Gor requires root access, which might be problematic in some hosting environments
  ‱ Gor can lose a significant portion of requests – which is definitely not nice, but it is getting better with every release.
  ‱ The lack of switches allowing you to disable/enable Gor for a while is a bit annoying
  ‱ Replaying requests from a file does not preserve the original time spacing between requests 🙁

Take a look at Gor; it is not perfect, but it embodies a very useful idea. I wonder what you think about it and how you deal with traffic replication.

Everything you ever wanted to know about uservices

A few weeks ago I had a chance to participate in the GeeCON Microservices conference (http://2015.microservices.geecon.org/). We (the GeeCON team) have always dreamed of having a conference by the seaside, so Sopot sounded like a perfect place. In this post I want to summarize my observations and notes taken during the conference. Not all presentations are described here; for the rest of my impressions take a look at my Twitter feed.

“It’s Not Just MicroServices: Areas of focus for MicroService Success”, Fred George

During his opening keynote Fred introduced the Cynefin framework, which to some extent explains the nature of complex systems. The Cynefin framework divides problems into 5 domains/types: obvious (or simple), complicated, complex, chaotic (we can make some assumptions and look for the answer) and disorder (lack of knowledge about the nature of the problem). As far as I understand, every problem-solving quest starts in disorder and can afterwards be categorized as obvious, complicated, complex or chaotic. The problem is that in the beginning we don’t know what kind of problem we are facing, so we don’t know what kind of tools are applicable (the Cynefin framework makes some suggestions in this matter). Fred mentioned that most of the problems faced by modern companies can be categorized as chaotic. In such cases we need to look for new knowledge and learn quickly to get closer to the optimal solution – which is unknown.

What can be done to address such a use case? On the technical level, microservices look promising here. This paradigm is well suited for cloud environments, assuring scalability and productivity. It stresses the discardability of software, which is essential when we need to build a prototype to verify a hypothesis and discard it when the hypothesis turns out to be wrong. Fred presented the most common uservice architecture, with a shared, dumb event bus which gives asynchronous access to events to any service interested in a particular piece of the problem domain. After diving into the technical aspects, Fred moved on to the organization of work in microservice environments. I will just refer to my Twitter notes here:

  ‱ Kill the specialists; we need full-stack problem solvers. In theory specialists are more productive, but in practice specialization causes communication overhead and may lead to delays.
  ‱ A new project means you will get a new job title
  ‱ Fun was one of the most frequent words in Fred’s keynote – when people have fun, they are more engaged and effective
  ‱ It takes 3 to 10 weeks to build an efficient team, so forming a team is a cost; that is why you should bring the work to the team

Microservices are usually mentioned as a technical paradigm, but after Fred’s keynote it was clear that this approach needs to be reflected in the organizational structure.

“Swimming upstream in the container revolution”, Bert Jan Schrijver

Bert gave a very nice talk in which he described a few practices which he and his teams have adopted. Once again I will refer to my Twitter notes and mention only the most interesting things from my personal point of view:

  ‱ “Hands off” policy – no logging into servers; if you need a change or an update, just update your Puppet manifest. This one is crucial if we want to treat our infrastructure as code.
  ‱ Nevertheless, testing Puppet manifests before pushing them to production might be tricky
  ‱ AWS has its limits (e.g. your peak of traffic might occur at the same time as everyone else’s)
  ‱ “Don’t depend on availability of Ops experts” – this kills team progress.

Slides can be found here.

“Microservices – enough with theory, let’s do some coding”, Tomek SzymaƄski and Marcin Grzejszczak

This was the only presentation with live coding – Tomek and Marcin produced around 5 lines of code ;-). Nevertheless, it was the most technical presentation of the conference. What sleeps under the hood of the microservices ecosystem? Monitoring, alerting, deployment, dependency resolution and discoverability of services, all of this on top of ZooKeeper, Grafana, Ansible, ELK, Spring Boot, Hystrix, Rundeck and Slack ;-). Since “sudo apt-get install microservices” still does not work, we need to deal with a plethora of tools to automate all the steps necessary to build microservices. All the code examples were based on Accurest and 4Finance’s micro-infra-spring (https://github.com/4finance/micro-infra-spring).

Among other things, Tomek and Marcin mentioned a very important practical feature/pattern: the correlation id. In order to track the flow of a particular request through a stack of a few dozen microservices, you need to assign a unique identifier to every user request and pass it downstream. This allows you to visualize the request flow and debug/monitor your system.
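
As a hypothetical illustration (the header name and URLs below are assumptions, not taken from micro-infra-spring), the pattern boils down to generating an id at the edge, passing it along with every call, and searching aggregated logs for it later:

# generate an id, attach it to a request, then look it up across service logs
CID=$(uuidgen)
curl -H "X-Correlation-Id: $CID" "http://localhost:8080/api/orders"
grep "$CID" /var/log/services/*.log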

In general it was a very nice presentation; nevertheless, 30 minutes was definitely not enough to cover all the topics prepared by the presenters.

“Scaling microservices at Gilt”, Adrian Trenaman

Adrian described the whole evolution of architecture at Gilt, starting from a successful startup to a mature company which still wants to innovate and grow.

  ‱ Voluntary adoption – give engineers the freedom to choose the tools they like. This is something similar to the theory of evolution 😉 production use will verify which tools are right. Devs are smart enough to choose tools that work instead of the new, shiny JavaScript framework (no offence).
  ‱ Once again it was stressed that microservices are tightly connected to how an organization operates.
  ‱ Dark canary and Testing in Production (TiP) are preferred over traditional testing. Adrian mentioned that for read-only services TiP is very natural 😉 especially when implemented the Facebook way.
  ‱ It’s starting to be a nice tradition: after the open-sourcing of Hermes by Allegro at GeeCON in May, Gilt.tech also decided to release their tool Ion roller during GeeCON in Sopot! Hope to see more open source activity at future GeeCONs 😉

Adrian’s presentation made a huge impression on me, and I am really glad that he decided to visit Sopot. Slides can be found here.

“Architecture without Architects”, Erik Dörnenburg

Erik’s talk was the closing keynote of the entire conference. He talked about the role of the architect in the modern software industry. Comparing the software industry to traditional architecture is wrong; software architecture is more like town planning – e.g. take a look at the evolution of London and compare it to long-living IT systems.

Erik presented a few very striking examples of how diagram abstractions are implemented by real systems; we as developers need to be aware of the abstractions which surround us. That is why architecture (abstraction/planning) and development cannot be separated. In conclusion, Erik stated that there is no reason to maintain the role of the architect as we know it, because most of the activities performed by architects can and should be handled by developers.

Summary

I really enjoyed this conference from both perspectives: as an organizer and as a participant. After this gig I am a great fan of single-track conferences. This format requires a lot more from the organizers, who must select only the best talks (which is not that hard when you deal with a focused event), but you don’t have that unpleasant feeling of missing something interesting, as one usually does with multiple tracks.

Thanks to everyone who was involved in the organization, especially Kuba Marchwicki, Tom Bujok and MichaƂ Gruca, our awesome colleagues from Tricity JUG. Apart from these guys I need to mention Adrian Nowak (you’re awesome!) and Ɓukasz Stachowiak, who were the most active in organizing the event.

Summary of 2014

Yup, it is almost March and I am only now writing a summary of the previous year. In general 2014 was great, but the beginning of 2015 has been a real blast so far ;-). Last year, after reading “Make summaries, not promises”, I decided that this is the way to go. So here it goes – the most relevant events of 2014 (related to my professional life).

Changing a job

After 8 years at PSNC, I changed employers. It was definitely a good decision; not an easy one, but in general I am really glad that I made it. At PSNC I had a chance to work with great people on awesome things. I visited several nice places, including my beloved Balkans, Istanbul, Lisbon and many, many others. Librarianship is not considered something very exciting, but the librarians I have met are very far from that stereotype (they are more like this ;-)).

Allegro is a completely different kettle of fish, and I really like the atmosphere of this company. Sometimes I feel like a kid in a candy shop: there are so many technological candies here that it is hard to decide which one to eat first ;-). Really great people and a whole load of stuff to learn – a good choice.

Conferences/JUGs

I have been organizing all kinds of community events since 2007. Whenever I attend a meetup or conference, it is highly probable that I am helping to organize it in one way or another (sic!). I had a firm resolution to change that.

In 2014 I had an opportunity to speak at three great conferences: Devcrowd 2014, Atmosphere 2014 and, last but not least, Polyconf. Apart from that, thanks to my JUnit talk (who could have expected that people would like to listen about good old JUnit?), I visited PoznaƄ JUG and TrĂłjmiasto JUG (courtesy of Jakub Marchwicki) – a really nice experience.

Polyconf is something new. I was really impressed by the diversity of topics, languages and approaches touched upon during this conference. I am really looking forward to going there in 2015.

When it comes to regular conference attendance, I had never before had a chance to participate in a totally focused event. Most dev conferences cover a broad set of topics (depending on which speakers are available) related to all kinds of software development areas. In November I participated in Lucene/Solr Revolution in Washington, D.C., a conference entirely dedicated to Solr/Lucene. I must say that this kind of approach gives really nice results. To some extent it was an inspiration for Test Driven GeeCON, which took place in January 2015.

Technical reviewer

For the second time (after the NetBeans 7 Cookbook) I had a chance to appear as a technical reviewer for one of Packt Publishing’s books. I broke off my relationship with Java EE after leaving PSNC, but I was honoured when my former students and great colleagues MichaƂ MatƂoka and MichaƂ Ćmil asked me to take a look at their very first book. “Java EE 7 Development with Wildfly” is a really nice compendium of knowledge about Java EE 7. If you are looking for a book which will show you the testable face of the enterprise flavour of Java, this is the right choice. Once again, congratulations on getting to the end of it, guys!

What I didn’t like in 2014

I gave up on running, which was the least time-consuming form of sport that I know. I’ve promised myself that I will come back to it. Thankfully, I am still an active biker ;-).

2015

As I said in the beginning, 2015 started with a series of very important things: our third child (Matylda) was born, and together with Ɓukasz Stachowiak (and others in the GeeCON team!) we managed to organize Test Driven GeeCON in just two months. Looking at these things, it seems that 2015 might be even better than 2014.

As one of my resolutions, I’ve promised myself to resurrect this blog (and of course support allegrotech.io) with at least 6 posts. Apart from that, I hope that 2015 will be the year of open source for me. I’ve never contributed to OSS and it’s time to change that.

Automatically start JavaDB process in Eclipse

In my previous post I mentioned that NetBeans IDE starts a background Derby process every time it is required during Glassfish startup. You can have a similar thing in Eclipse. After installing the Glassfish AS Adapter for Eclipse, go to Window>Preferences>GlassFish Preferences (or type “glass” in the filter box), check “Start the JavaDB database process when Starting GlassFish server” and voilà!

Global Day of Coderetreat in PoznaƄ

Recently, I had a chance to participate in organising PoznaƄ’s part of the Global Day of Coderetreat. PoznaƄ JUG, together with PoznaƄ GTUG (which stands for Google Technology User Group), organised the 2nd Code Retreat in PoznaƄ. Just to remind you, the first one was held in January 2011 and resulted (apart from loads of great code) in this awesome movie (thanks to Zbigniew Wantuch).

But let’s get to the point: our event was part of the Global Day of Coderetreat (GDCR). The idea is simple: on the 3rd of December, developers from all around the world join their local Code Retreats to experiment and improve their skills in Test Driven Development, pair programming and clean object design. Gathering 2000 developers and making them deal with Conway’s Game of Life is awesome in itself, but it seems that, apart from that, Corey Haines (and others) managed to create a real, interesting community around the idea of Code Retreat – which is ultimately awesome ;-).

Thanks to the courtesy of PoznaƄ University of Technology, we had a really nice venue for our Code Retreat. Initially I was hoping for more than 40 participants; in the end we had 35 registered, 20 of whom made it to the event. Interestingly, only three or four people had attended our first CR.

Most of the people who came to the CR were familiar with coding in Java, but there were also a few working in JavaScript and Ruby. In the announcement I wrote that knowledge of programming in Java was obligatory (as a common denominator), but that pairs could work in whatever language they wanted. It seems that this requirement was not necessary, because the mixed-language pairs had a lot of fun.

As for knowledge of TDD and experience in pair programming, most participants knew that these techniques exist but had not had a chance to practise them before. So, during the first two sessions (and retrospectives), our great facilitators Wojtek Buras (from PSI Polska), Jakub Milkiewicz (from PoznaƄ JUG) and PaweƂ Stawicki (from Software Mill) had quite a lot to do.

We had 5 sessions:

  • session #1 – No constraints
  • session #2 – Develop a solution to the problem with the constraint that no computers are to be used in the first 10 minutes
  ‱ session #3 – Develop a solution in which no primitives can be used – the solution must be built on classes
  • session #4 – TDD as you meant it
  • session #5 – No touchpad/mouse, small methods (<5 lines)

During the sessions everything went very smoothly (at least IMHO). Apart from great coding, we had really cool retrospectives (people had no problem talking about their issues and discussing them).

Next time we organise a CR we will surely add one more session (maybe even two). I also have a personal wish list of constraints ;-):

  ‱ While reading notes from hosts of other CRs, I saw that the silent session (or Mute Evil Pair) was a really fun one. In this exercise developers are not allowed to speak; they can communicate only through code.
  ‱ Another interesting idea is a legacy code session: after a session ends, no one is allowed to delete code (!), and instead of swapping pairs, people swap workstations.
  ‱ No loops/no conditionals, because this sounds like a real challenge 😉

Considering that everyone is asking when we will have the next CR, it seems that we had a really successful workshop ;-). Pictures from the CR are available here.

Big thanks to Ɓukasz Stachowiak and Daniel Mendalka for their organisational effort – thanks guys!

Sniffers

This is quite an old post; it was not published earlier because I wanted to investigate the topic a bit more. I hope you will find it useful (and not too misleading).

SEVERE: Exception while deploying the app [arquillian-example] : 
  Sniffers with type [connector] and type [ejb] should not claim the archive at the same time. 
  Please check the packaging of your archive [/..../arquillian-example]

Maybe some of you have experienced this kind of error while working with Glassfish. I got it when I started work on a new project (Maven-driven, a JEE 6 app with Arquillian/JUnit and embedded Glassfish). The project structure was taken from my previous developments, and I had added literally one class. First deployment, and this quite cryptic error message stopped my work.

While looking for hints, I learned that sniffers analyze the archive submitted to the application server. If you take a look into domain.xml in your Glassfish instance, you will find that applications (applications/application) can have various sniffers associated with them, e.g.:

<applications>
  <application context-root="/test" location="${com.sun.aas.instanceRootURI}/eclipseApps/test/"
               name="test" directory-deployed="true" object-type="user">
    <property name="appLocation" value="${com.sun.aas.instanceRootURI}/eclipseApps/test/"></property>
    <property name="keepSessions" value="true"></property>
    <property name="defaultAppName" value="test"></property>
    <module name="test">
      <engine sniffer="jpa"></engine>
      <engine sniffer="security"></engine>
      <engine sniffer="web"></engine>
      <engine sniffer="weld"></engine>
    </module>
  </application>
</applications>

This app uses JPA, security, web and Weld. From the error message it seems that some sniffers cannot be assigned together with others.

To find out what was going on, I took a look at what was placed in the final WAR. I discovered that Maven was adding glassfish-emb-all.jar to WEB-INF/lib. A quick look at pom.xml showed that the dependency on embedded Glassfish was not properly scoped (it lacked <scope>test</scope>). Without glassfish-emb in the WAR everything went smoothly and the app was properly deployed.
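
For reference, the fixed dependency looked more or less like this (a sketch – the exact artifact coordinates for embedded Glassfish 3.x are given from memory and may differ for your version):

<dependency>
    <groupId>org.glassfish.extras</groupId>
    <artifactId>glassfish-embedded-all</artifactId>
    <version>3.1</version>
    <!-- this scope was missing -->
    <scope>test</scope>
</dependency>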

Oh well, just a stupid mistake in the pom. But wait – what is especially interesting in this issue is that the sniffer assignment seems to be done only once. This means that if you deploy your app with correct dependencies and afterwards accidentally include glassfish-emb-all in your WAR, everything will still be fine – at least until the next change of application server, which might happen when moving to a production web server.
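
If you suspect a stale sniffer assignment, a full undeploy/deploy cycle (rather than an in-place redeploy) should force Glassfish to run sniffer detection from scratch, since undeploying removes the application entry from domain.xml. A sketch, with a hypothetical application name:

asadmin undeploy arquillian-example
asadmin deploy target/arquillian-example.war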

24 easy steps to JEE 6 development with Eclipse

As some of you probably know, I have the privilege of teaching Java-related subjects at PoznaƄ University of Technology. During the lab sessions I lead, we went through the configuration of Eclipse IDE for working with JEE 6. I have been using Eclipse since I started writing in Java, so some things are obvious to me. We wanted to work with Glassfish, Apache Maven and Subversion, which are quite common tools, so it should be quite simple to get Eclipse to work with this setup. Let’s start.

  1. Download Eclipse IDE for Java EE Developers
  2. Unpack and run
  3. Now I will have to install m2eclipse. I would like to use the Eclipse Marketplace because it does not require knowledge about update sites for plugins. So I choose “Help” and then “Eclipse Marketplace”.
  4. I am searching for m2eclipse
 ok, there is something with an Install button here [click].
  5. Accept the license agreement
  6. Confirm that I want to install unsigned jars
  7. Installation was successful; Eclipse asks if I want to restart the IDE – “ok, if you have to”.
  8. I know that I also need m2eclipse-extras for integration with the Eclipse Web Tools Platform.
  9. I choose Eclipse Marketplace and search for m2eclipse-extras – without luck.
  10. It seems that I need an update site for this. From the Help menu I choose “Install new software”
  11. I start working with the new update site for m2eclipse-extras
  12. I need to choose from 4 bundles on that site. I cannot click “select all” because I don’t have the Subclipse plugin installed; anyway, who is using SVN nowadays? 😉
  13. After choosing the right bundles it is time to accept the license agreement
  14. Once again I need to confirm that I want to install unsigned jars
  15. Installation was successful; Eclipse asks if I want to restart the IDE – “ok, if you have to”.
  16. It is high time to import my project into the workspace: choose File>Import>Import existing Maven project (but who knows, maybe “Import existing project” would also work ;-))
  17. At this stage m2eclipse shows a warning that my Eclipse is running on a JRE, not a JDK, and this may cause some problems
  18. To correct this I have to edit eclipse.ini and add the “-vm” parameter with the path to the JDK (on separate lines – but everyone knows how an .ini file works; see the sketch after this list)
  19. Ok, now it is time for Glassfish: I choose Window>Preferences>Server Runtime and click the Add button
  20. There is no Glassfish here, so I choose to download additional server adapters.
  21. Ok, now I can choose from four different Glassfish types (eventually I will pick the appropriate one).
  22. During the creation of the new server runtime, I have to remember to point to glassfish-3.1/glassfish (the base glassfish-3.1 directory is not enough).
  23. After creating the new adapter, it should be possible to run the imported application using Glassfish. It should be but

  24. Eclipse says that in order to run Glassfish I need a JDK – what the heck, I’ve just changed my eclipse.ini! Someone told me that I have to define a new “Installed JRE” pointing to my JDK installation.
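
For reference, the eclipse.ini change from step 18 looks like this (the JDK path is only an example – adjust it to your installation; note that -vm and the path must be on separate lines and must appear before -vmargs):

-vm
/usr/lib/jvm/jdk1.6.0/bin/java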

Only 24 easy steps – great!  😉

I’m not a huge fan of NetBeans, but please take a look at how NetBeans IDE deals with this issue.

  1. Download NetBeans (with Glassfish)
  2. Run NetBeans and Glassfish installer
  3. Run NetBeans IDE
  4. From File menu choose Open project
  5. From the Tools menu choose Servers and ensure that Glassfish is in place. (This step is unfortunately necessary :( – without it NetBeans claims that there is no Glassfish defined; you just have to open the dialog window, nothing more.)
  6. Run Project

NetBeans will even launch the Derby network server on its own if you use this DB in your project. I don’t have anything against Eclipse puzzles, but NetBeans simply works!

I know why all of this has to be done in Eclipse, but the question is: who will bother with it when they can just launch Visual Studio 😉. In fact it is just a matter of reasonable defaults – is it really so hard? Maybe JEE is too new to get proper support in Eclipse?

Tesseract 3.0 installation on Ubuntu 10.10 server

This is a cross-post, original post can be found here.

Tesseract is an optical character recognition (OCR) engine originally developed by Hewlett-Packard; in 2005 it was open sourced under the Apache license. Its development is now supported by Google. Version 3.0 was released in September 2010 and, among other things, offers support for the Polish language.

The wiki at the Tesseract website is a bit messy, which is why I decided to describe my experience with building and installing Tesseract 3.0. I was working on Ubuntu 10.10 Server Edition, deployed on a virtual machine created using Oracle VirtualBox.

First, I installed build-essential and autoconf (it may also be necessary to install libtool):

sudo apt-get install build-essential
sudo apt-get install autoconf

The next step, according to the Tesseract wiki, is to install the dependencies:

sudo apt-get install libpng12-dev
sudo apt-get install libjpeg62-dev
sudo apt-get install libtiff4-dev
sudo apt-get install zlib1g-dev

Please note that the name of the zlib1g-dev package is misspelled in the wiki.

I downloaded the sources of Leptonica 1.6.7 from its Google Code website and then followed a rather standard build process (you may also try installing the libleptonica-dev package instead):

./configure
make
sudo make install
sudo ldconfig

The next step was downloading tesseract-3.00.tar.gz from the Tesseract project website. Uncompress the archive, go to the tesseract-3.00 directory and invoke:

./runautoconf
./configure

After invoking ./configure you should check config_auto.h to verify that the dependencies were recognized correctly by the ./configure script. The header file should contain a #define for HAVE_LIBLEPT, HAVE_LIBPNG, HAVE_LIBTIFF, HAVE_LIBJPEG and HAVE_ZLIB.
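
A quick way to check all of them at once:

grep -E "HAVE_(LIBLEPT|LIBPNG|LIBTIFF|LIBJPEG|ZLIB)" config_auto.h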

make
sudo make install
sudo ldconfig

Without ldconfig you might experience problems launching Tesseract.

Download the languages of your choice from the Tesseract website and place them (uncompress them first) in your tessdata folder (/usr/local/share/tessdata by default).
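
For example, for the Polish data this boils down to something like the following (the archive name follows the 3.00 release convention and the layout inside may differ, hence the find):

tar -xzf tesseract-3.00.pol.tar.gz
sudo cp $(find . -name "pol.traineddata") /usr/local/share/tessdata/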

Now run the OCR (note that Tesseract appends .txt to the output base name you pass):

tesseract phototest.tiff out -l eng
more out.txt

Hope that this was helpful.

Update (19th of October 2011):
I was trying to compile revision 627 of Tesseract on my Ubuntu 11.04; after compiling Leptonica and invoking ./configure for the Tesseract source code, I was still getting a “leptonica library missing” error. Everything went smoothly after adding these two lines at the beginning of the configure file:

CPPFLAGS="-I/usr/local/include" 
LDFLAGS="-L/usr/local/lib"

This solution was found here – thank you.

Videos from PoznaƄ NetBeans Platform Training 2010

It took me a while, but I finally managed to publish two videos from the NetBeans Platform Training held in January 2010 in PoznaƄ. If you are interested in the NetBeans Platform services architecture, take a look at these two great lectures delivered by Toni Epple:

  • NetBeans platform Service Architecture (1) – Lookups
  • NetBeans platform Service Architecture (2) – Registry

More details about this remarkable event can be found at PoznaƄ JUG website.

Algorithms of the intelligent web – review

Thanks to MEAP and PoznaƄ JUG I had a chance to read “Algorithms of the Intelligent Web” by Haralambos Marmanis and Dmitry Babenko. The content is organized into seven chapters, starting with a general introduction which gives a broad overview of the state of the art in the field of modern web applications. The second chapter offers a few bites of theory and, finally, a practical example of building a simple search engine. You can also find information about using classifiers, creating recommendation systems and clustering documents. The final chapter presents a complete example of a news portal which incorporates all the introduced techniques in a neat working solution.

Chapters two to six have a similar structure: they start with the theory necessary to understand the presented concepts, followed by clear examples of real-world usage. The examples are extended with additional, more advanced features, but everything remains perfectly understandable. Readers will learn how to use existing APIs (e.g. digg.com) and how to aggregate and transform content in order to create innovative mashups. After the practical part, readers will find some notes on using the presented solutions in production. The authors describe common mistakes which lead to dead ends during the implementation of modern intelligent web applications, and this is definitely one of the biggest advantages of this book. It is also worth mentioning that Marmanis and Babenko emphasize the role of the quality of results and show general ways in which one can evaluate the obtained outcome. At the end of each chapter readers will find a “To do” section with tasks that may be done in order to better utilize the presented solutions.

All examples are delivered in BeanShell and Java. Nowadays, in the age of frameworks like Grails or Ruby on Rails, the choice of BeanShell is quite unexpected. Examples in JRuby or Groovy could simplify the adoption of the presented solutions in real-life web applications. But this is a minor thing: BeanShell is very similar to Java, so no Java developer should have problems understanding the examples. In the MEAP copy of the book which I evaluated there was also no information about how to run the presented examples, nor any note that knowledge of Java or BeanShell is required. I hope this was improved in the final release (judging from the answer to my feedback, those issues were addressed in the final version). The authors present quite a few open source libraries which can easily be used not only for creating intelligent web applications but also in the everyday work of a Java developer.

What’s missing? I would love to read more about the OpenSocial API, which is only mentioned in the first chapter of the book. Another thing that is missing is references to the so-called Web 3.0; I’m constantly looking for a comprehensive overview of semantic web applications (e.g. OpenCalais, Hakia). Creating a small semantics-enabled application would definitely be a plus.

“Algorithms of the Intelligent Web” is definitely worth recommending to all developers who want to gain knowledge about useful information retrieval and machine learning techniques. These techniques are presented in a very clear and understandable way. The book contains universal methods and algorithms, and knowledge like this does not get old as fast as, for example, web frameworks do. I will definitely come back and read this book again.