For most of our services we have custom JMeter-based performance tests that try to mimic production usage of a given service. JMeter has built-in support for generating random values; you can also add a CSV file with values copied straight from production, add custom delays, etc. It is nice, but it is in fact an approximation; there is nothing like production ;-). Testing in production still sounds a bit scary, but one of the techniques named among Testing in Production (TiP) patterns is traffic replication.
Also, somebody has way too much time on their hands. pic.twitter.com/Gae1nhkG8Y
— Jeff Darcy (@Obdurodon) August 17, 2015
This can be approached from several different angles. If you have something like Varnish or HAProxy in front of your service, most likely you can configure this quite easily. Another way is to log requests in your application and then replay them using simple curl (or something similar), but this requires additional work.
After looking at different possibilities I came across Gor. Gor gives you something a bit different than the two use cases mentioned above. In general, it can give you a flavour of continuous performance testing. So, long story short — what can Gor do for you?
First, you need to install Gor on one of your production machines.
wget https://github.com/buger/gor/releases/download/v0.10.1/gor_0.10.1_x64.tar.gz
tar -xvf gor_0.10.1_x64.tar.gz
Gor is written in Go, so after executing this you will end up with a single, self-contained executable file ;-). As a warm-up, let's record some requests into a file.
sudo ./gor --input-raw :8080 --output-file requests.gor --http-allow-method POST
As you can see in the example above, you can limit recorded requests to POSTs only (the complete list of selectors can be found in Gor's documentation). You can also rewrite recorded requests in all sorts of ways using your own custom scripts. Nice. It is time for the first downside of Gor: there is no stop button, so in order to stop the recording you need to shut down the Gor process (correct me if I am wrong).
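If you only want to record for a fixed period, one simple workaround is to let a watchdog stop the process for you. This is a minimal sketch, assuming the standard coreutils timeout command is available on the machine (the 300-second duration is just an example):

# timeout sends SIGTERM to gor after 300 seconds, which ends the recording
sudo timeout 300 ./gor --input-raw :8080 --output-file requests.gor --http-allow-method POST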
In order to replicate traffic, you need to cast the following spell:
./gor --input-file requests.gor --output-http "http://test01-example.com:8080"
Gor has some nice additions which allow you to use this traffic for performance tests: you can specify a multiplier which will increase (or decrease) the number of requests.
./gor --output-http-stats --input-file "requests.gor|200%" --output-http "http://test01-example.com:8080"
If you want to set up constant traffic replication from your production machine into staging, you can run Gor in the following way:
sudo gor --input-raw :8080 --output-http "http://test01-example.com:8080"
It is worth mentioning that this is the original use case of Gor and the reason why it was created (which explains why it does not have a STOP button).
You can now sit back, look at the dashboards of your test instance and compare response times, error rates, etc. with production. In practice this may of course be a bit more complicated, but it is a perfect fit for read-only services, which are not that rare these days.
As for downsides:
Take a look at Gor; it is not perfect, but it embodies a very useful idea. I wonder what you think about it and how you deal with traffic replication.
A few weeks ago I had a chance to participate in the GeeCON Microservices conference (http://2015.microservices.geecon.org/). We (the GeeCON team) have always dreamed of having a conference near the seaside, so Sopot sounded like a perfect place. In this post I want to summarize my observations and notes taken during the conference. Not all presentations are described here; for the rest of my impressions, take a look at my Twitter feed.
“It’s Not Just MicroServices: Areas of focus for MicroService Success”, Fred George
During his opening keynote Fred introduced the Cynefin framework, which to some extent explains the nature of complex systems. The Cynefin framework divides problems into 5 domains/types: obvious (or simple), complicated, complex, chaotic (we can make some assumptions and look for the answer) and disorder (lack of knowledge about the nature of the problem). As far as I understand, in the beginning every problem-solving quest starts in disorder and can afterwards be categorized as obvious, complicated, complex or chaotic. The problem is that in the beginning we don't know what kind of problem we are facing, so we don't know what kind of tools are applicable (the Cynefin framework makes some suggestions in this matter). Fred mentioned that most of the problems faced by modern companies can be categorized as chaotic. In this case we need to look for new knowledge and learn quickly to get closer to the optimal solution — which is unknown.
The Cynefin framework #geecon #microservices @fgeorge52 pic.twitter.com/Si8s8Dk8r2
— Szymon Stepniak (@wololock) September 11, 2015
What can be done to address such a case? On the technical level, microservices look promising here. This paradigm is well suited for cloud environments, assuring scalability and productivity. It stresses the discardability of software, which is essential when we need to build a prototype to verify a hypothesis and discard it when the hypothesis turns out to be wrong. Fred presented the most common microservice architecture with a shared, dumb event bus, which gives asynchronous access to events to any service interested in a particular piece of the problem domain. After diving into technical aspects, Fred jumped into the organization of work in microservice environments. I will just refer to my Twitter notes here:
Microservices are usually mentioned as a technical paradigm but after Fred’s keynote it was clear that this approach needs to be reflected in the organizational structure.
“Swimming upstream in the container revolution”, Bert Jan Schrijver
Bert gave a very nice talk in which he described a few practices which he and his teams have adopted. Once again I will refer to my Twitter notes and mention only the things I personally found most interesting:
Slides can be found here.
“Microservices – enough with theory, let’s do some coding”, Tomek Szymański and Marcin Grzejszczak
This was the only presentation with live coding — Tomek and Marcin produced around 5 lines of code ;-). Nevertheless, it was the most technical presentation of the conference. What lies under the hood of a microservices ecosystem? Monitoring, alerting, deployment, dependency resolution and discoverability of services, all this on top of ZooKeeper, Grafana, Ansible, ELK, Spring Boot, Hystrix, Rundeck and Slack ;-). Since “sudo apt-get install microservices” still does not work, we need to deal with a plethora of tools to automate all the steps necessary to build microservices. All the code examples were based on Accurest and 4Finance's spring boot micro infra (https://github.com/4finance/micro-infra-spring).
Enough talking – let’s look at some diagrams @szimano @MGrzejszczak at #geecon pic.twitter.com/3A245VcPCU
— Adam Dudczak (@maneo) September 11, 2015
Among other things, Tomek and Marcin mentioned a very important practical feature/pattern: the correlation id. In order to track the flow of a particular request through a stack of a few dozen microservices, you need to assign a unique identifier to every user request and pass it downstream. This allows you to visualize the request flow and debug/monitor your system.
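To make the pattern more tangible, here is a minimal command-line sketch (the header name X-Correlation-Id, the host name and the log path are my assumptions, not something shown during the talk): generate an identifier, send it with the initial request, and later use it to follow that request through the logs of every service it touched.

# generate a unique id for this request (header name and host are hypothetical)
CID=$(uuidgen)
# the edge service stores the id and passes it downstream with every call it makes
curl -H "X-Correlation-Id: $CID" "http://frontend-example.com:8080/orders/42"
# later: trace the request across the aggregated logs of all services
grep -r "$CID" /var/log/services/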
In general it was a very nice presentation; nevertheless, 30 minutes was definitely not enough to cover all the topics prepared by the presenters.
“Scaling microservices at Gilt”, Adrian Trenaman
Adrian described the whole evolution of architecture at Gilt, from a successful startup to a mature company that still wants to innovate and grow.
Microservices architecture at @gilttech at #geecon pic.twitter.com/sWdwcZHGf8
— Adam Dudczak (@maneo) September 11, 2015
I was hugely impressed by Adrian's presentation and really glad that he decided to visit Sopot. Slides can be found here.
“Architecture without Architects”, Erik Dörnenburg
Erik’s talk was the closing keynote of the entire conference. He talked about the role of the architect in the modern software industry. Comparing the software industry to traditional architecture is wrong; software architecture is more like town planning: take a look at the evolution of London and compare it to long-living IT systems.
Code city visualization (http://t.co/kup3sDS1QJ) #geecon pic.twitter.com/rzzgq3sDNy
— Adam Dudczak (@maneo) September 11, 2015
Erik presented a few very striking examples of how diagram abstractions are implemented by real systems; we as developers need to be aware of the abstractions which surround us. That is why architecture (abstraction/planning) and development cannot be separated. In conclusion, Erik stated that there is no reason to maintain the role of the architect as we know it, because most of the activities performed by architects can and should be handled by developers.
Summary
I really enjoyed this conference from both perspectives: as an organizer and as a participant. After this gig I am a great fan of single-track conferences. They require a lot more from the organizers, who have to select only the best talks (which is not that hard when you deal with a focused event), but you don't have the unpleasant feeling that you are missing something interesting, as one usually has with multiple tracks.
Night hacking yesterday to set-up the WiFi for #geecon #microservices. It worked finally! pic.twitter.com/08x0p8rku4
— Tomek Bujok (@tombujok) September 11, 2015
Thanks to everyone who was involved in the organization, especially Kuba Marchwicki, Tom Bujok and Michał Gruca, our awesome colleagues from Tricity JUG. Apart from these guys I need to mention Adrian Nowak (you're awesome!) and Łukasz Stachowiak, who were the most active during the organization of the event.
Yup, it is almost March and I am doing a summary of the previous year. In general 2014 was great, but the beginning of 2015 is a real blast so far ;-). Last year, after reading “Make summaries not promises”, I decided that this is the way to go. So, here it goes: the most relevant events of 2014 (related to my professional life).
Changing a job
After 8 years at PSNC, I changed employer. It was definitely a good decision; it was not an easy one, but in general I am really glad that I made it. At PSNC I had a chance to work with great people, on awesome things. I visited several nice places including my beloved Balkans, Istanbul, Lisbon and many, many others. Librarianship is not considered something very exciting, but the librarians I have met are very far from that (they are more like this ;-)).
Allegro is a completely different story, and I really like the atmosphere of this company. Sometimes I feel like a kid in a candy shop: there are so many technological candies here that it is hard to decide which one to eat first ;-). Really great people, a whole load of stuff to learn — good choice.
Conferences/JUGs
I have been organizing all kinds of community events since 2007. When I attend a meetup or conference, it is highly probable that I am helping to organize it in some way or another (sic!). I had a firm resolution to change that.
Nice, @maneo speaking at @Atmosphereconf pic.twitter.com/uY0IyomNha
— ags313 (@ags313) May 20, 2014
In 2014 I had an opportunity to speak at three great conferences: Devcrowd 2014, Atmosphere 2014 and, last but not least, Polyconf. Apart from that, thanks to my JUnit talk (who could have expected that people would want to listen to a talk about good old JUnit), I visited Poznań JUG and Trójmiasto JUG (thanks to the courtesy of Jakub Marchwicki) — a really nice experience.
Polyconf is something new. I was really impressed by the diversity of topics, languages and approaches touched on during this conference. I am really looking forward to going there in 2015.
When it comes to regular conference attendance, I had never had a chance to participate in a more focused event. Most dev conferences cover a broad set of topics (depending on which speakers are available) related to all kinds of software development areas. In November I participated in Lucene/Solr Revolution in Washington D.C., a conference entirely dedicated to Solr/Lucene. I must say that this kind of approach gives really nice results. To some extent it was an inspiration for Test Driven GeeCON, which took place in January 2015.
Technical reviewer
For the second time (after NetBeans 7 Cookbook) I had a chance to appear as a technical reviewer for one of the Packt Publishing books. I broke my relationship with Java EE after leaving PSNC, but I was honoured when my former students and great colleagues Michał Matłoka and Michał Ćmil asked me to take a look at their very first book. “Java EE 7 Development with WildFly” is a really nice compendium of knowledge about Java EE 7. If you are looking for a book which will show you the testable face of the enterprise flavour of Java, this is the right choice. Once again, congratulations on getting it done, guys!
Just received my copy of #java EE 7 dev. with #wildfly by @mmatloka and @michalcmil – looks really nice! pic.twitter.com/NZFTYpp51Z
— Adam Dudczak (@maneo) February 4, 2015
What I didn’t like in 2014
I gave up on running, which was the least time-consuming form of sport I know. I have promised myself that I will come back to it. Thankfully, I am still an active biker ;-).
2015
As I said in the beginning, 2015 started with a series of very important things: our third child (Matylda) was born, and together with Łukasz Stachowiak (and others from the GeeCON team!) we managed to organize Test Driven GeeCON in just two months. Looking at these things, it seems that 2015 might be even better than 2014.
As one of my resolutions, I have promised myself to resurrect this blog (and, of course, support allegrotech.io) with at least 6 posts. Apart from that, I hope that 2015 will be the year of open source for me. I have never contributed to OSS and it's time to change that.
In my previous post I mentioned that NetBeans IDE starts a background Derby process every time it is required during GlassFish startup. You can have a similar thing in Eclipse. After installing the GlassFish AS Adapter for Eclipse, go to Window > Preferences > GlassFish Preferences (or type “glass” in the filter box), check “Start the JavaDB database process when Starting GlassFish server” and voila!
Recently, I had a chance to participate in organising the Poznań part of the Global Day of Code Retreat. Poznań JUG together with Poznań GTUG (which stands for Google Technology User Group) organised the 2nd Code Retreat in Poznań. Just to remind you, the first one was held in January 2011 and resulted (apart from loads of great code) in this awesome movie (thanks to Zbigniew Wantuch).
But let’s get to the point: our event was part of the Global Day of Code Retreat (GDCR). The idea is simple: on the 3rd of December, developers from all around the world join their local Code Retreats to experiment and improve their skills in Test Driven Development, pair programming and clean object design. Gathering 2000 developers and forcing them to deal with Conway’s Game of Life is awesome in itself, but it seems that apart from that, Corey Haines (and others) managed to create a real, interesting community around the idea of Code Retreat – which is ultimately awesome ;-).
Thanks to the courtesy of Poznań University of Technology we had a really nice venue for our Code Retreat. Initially I was hoping to get more than 40 participants, but in the end we had 35 registered, 20 of whom managed to get to the event. What’s interesting, only three or four people had attended our first CR.
Most of the people who came to the CR were familiar with coding in Java, but there were also a few working in JavaScript and Ruby. In the announcement I wrote that knowledge of programming in Java was obligatory (as a common denominator), but pairs could work in whatever language they wanted. It seems that this requirement was not necessary, because mixed-language pairs had a lot of fun.
As for knowledge about TDD and experience in pair programming, most participants knew that these techniques exist but had not had a chance to practise them before. So, during the first two sessions (and retrospectives) our great facilitators Wojtek Buras (from PSI Polska), Jakub Milkiewicz (from Poznań JUG) and Paweł Stawicki (from Software Mill) had quite a lot to do.
We had 5 sessions:
During the sessions everything went very smoothly (at least IMHO). Apart from great coding sessions we had really cool retrospectives (people had no problem speaking about their issues and discussing them).
When we organise the next CR we will surely add one more session (maybe even two). I also have my personal wish list of constraints ;-):
Considering that everyone is asking when we will have the next CR, it seems that we had a really successful workshop ;-). Pictures from the CR are available here.
Big thanks to Łukasz Stachowiak and Daniel Mendalka for their organisational effort – thanks guys!
This is quite an old post; it was not published earlier because I wanted to investigate the topic a bit more. I hope that you will find it useful (and not very misleading).
SEVERE: Exception while deploying the app [arquillian-example] : Sniffers with type [connector] and type [ejb] should not claim the archive at the same time. Please check the packaging of your archive [/..../arquillian-example]
Maybe some of you have experienced this kind of error while working with GlassFish. I got it when I started to work on a new project (Maven-driven, a JEE 6 app with Arquillian/JUnit and embedded GlassFish). The project structure was taken from my previous developments and I had added literally one class. On the first deployment, this quite cryptic error message stopped my work.
While looking for some hints, I learned that sniffers analyze the archive submitted to the application server. If you take a look into domain.xml in your GlassFish instance, you will find that applications (applications/application) can have various sniffers associated with them, e.g.:
<applications>
  <application context-root="/test" location="${com.sun.aas.instanceRootURI}/eclipseApps/test/" name="test" directory-deployed="true" object-type="user">
    <property name="appLocation" value="${com.sun.aas.instanceRootURI}/eclipseApps/test/"></property>
    <property name="keepSessions" value="true"></property>
    <property name="defaultAppName" value="test"></property>
    <module name="test">
      <engine sniffer="jpa"></engine>
      <engine sniffer="security"></engine>
      <engine sniffer="web"></engine>
      <engine sniffer="weld"></engine>
    </module>
  </application>
</applications>
This app has JPA/security/web and Weld. From the error message it seems that some sniffers cannot be assigned together with others.
To find out what was going on, I took a look at what was placed in the final WAR. I discovered that Maven was adding glassfish-emb-all.jar to WEB-INF/lib. A quick look at pom.xml showed that the dependency on glassfish-emb was not properly scoped (it lacked <scope>test</scope>). Without glassfish-emb in the WAR everything went smoothly and the app was properly deployed.
Oh well, just a stupid mistake in the pom. But wait. What is especially interesting in this issue is that sniffer assignment seems to be done only once. This means that if you deploy your app with correct dependencies and afterwards accidentally include glassfish-emb-all in your WAR, everything will still appear fine, at least until the next change of application server, which might happen when moving to the production server.
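A quick way to catch this kind of packaging mistake before it reaches an application server is to list the contents of the built WAR (the path below assumes a standard Maven layout and the archive name from the error message):

# the grep should return nothing if the embedded GlassFish jar is properly test-scoped
jar tf target/arquillian-example.war | grep -i glassfish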
As some of you probably know, I have the privilege of teaching Java-related subjects at Poznań University of Technology. During the lab sessions I am leading, we went through the configuration of Eclipse IDE to work with JEE 6. I have been using Eclipse since I started writing in Java, so some things are obvious to me. We wanted to work with GlassFish, Apache Maven and Subversion, which are quite common tools, so it should be quite simple to get Eclipse to work with this setup. Let's start.
Only 24 easy steps – great!
I’m not a huge fan of NetBeans, but please take a look at how NetBeans IDE deals with this issue.
NetBeans will even launch a Derby network server on its own if you use this DB in your project. I have nothing against Eclipse puzzles, but it seems that NetBeans simply works!
I know why all this has to be done in Eclipse, but the question is who will bother with it when they can just launch Visual Studio. In fact it is just a matter of reasonable defaults; is it really so hard? Maybe JEE is too new to get proper support in Eclipse?
This is a cross-post, original post can be found here.
Tesseract is an optical character recognition (OCR) engine originally developed by Hewlett-Packard; in 2005 it was open sourced under the Apache license. Its development is now supported by Google. Version 3.0 was released in September 2010 and, among other things, this version offers support for the Polish language.
The wiki at the Tesseract website is a bit messy, which is why I decided to describe my experience with building and installing Tesseract 3.0. I was working on Ubuntu 10.10 server edition, deployed on a virtual machine created using Oracle VirtualBox.
First, I installed build-essential and autoconf (it may also be required to install libtool):
sudo apt-get install build-essential
sudo apt-get install autoconf
The next step, according to the Tesseract wiki, is to install dependencies:
sudo apt-get install libpng12-dev
sudo apt-get install libjpeg62-dev
sudo apt-get install libtiff4-dev
sudo apt-get install zlib1g-dev
Please note that the name of the zlib1g-dev package is misspelled in the wiki.
I downloaded the sources of Leptonica 1.6.7 from its Google Code website and then followed the rather standard build process (you may also try to install the libleptonica-dev package instead):
./configure
make
sudo make install
sudo ldconfig
The next step was downloading tesseract-3.00.tar.gz from the Tesseract project website. Uncompress the archive, go to the tesseract-3.0 directory and invoke:
./runautoconf
./configure
After invoking ./configure you should check config_auto.h to see whether the dependencies were recognized correctly by the ./configure script. The header file should contain a #define for HAVE_LIBLEPT, HAVE_LIBPNG, HAVE_LIBTIFF, HAVE_LIBJPEG and HAVE_ZLIB.
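A quick sanity check (just a convenience, not part of the official instructions) is to grep the generated header for those defines before building:

# all five libraries should show up as #define ... 1
grep -E 'HAVE_(LIBLEPT|LIBPNG|LIBTIFF|LIBJPEG|ZLIB)' config_auto.h

If all five show up, continue with the standard build: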
make
sudo make install
sudo ldconfig
Without ldconfig you might experience problems with launching Tesseract.
Download the languages of your choice from the Tesseract website and place them (uncompress them first) in your tessdata folder (by default /usr/local/share/tessdata).
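For example, installing the Polish data could look roughly like this (the archive name below is hypothetical; what matters is that the pol.traineddata file ends up directly in the tessdata directory):

tar -xzf tesseract-ocr-3.00.pol.tar.gz   # hypothetical archive name, adjust to what you downloaded
sudo cp tessdata/pol.traineddata /usr/local/share/tessdata/   # adjust the source path if the archive unpacks differently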
Now run the OCR using:
tesseract phototest.tiff out -l eng
more out.txt
Hope that this was helpful.
Update (19th of October 2011):
I was trying to compile revision 627 of Tesseract on my Ubuntu 11.04; after compiling Leptonica and invoking ./configure for the Tesseract source code, I was still getting a “leptonica library missing” error. Everything went smoothly after adding these two lines at the beginning of the configure file.
CPPFLAGS="-I/usr/local/include"
LDFLAGS="-L/usr/local/lib"
This solution was found here – thank you.
It took me a while, but finally I managed to publish two videos from the NetBeans Platform Training which was held in January 2010 in Poznań. If you are interested in the NetBeans Platform services architecture, take a look at these two great lectures delivered by Toni Epple:
More details about this remarkable event can be found at the Poznań JUG website.