Category Archives: web development

GWT, maven and using the standard maven src/main/webapp directory

We have changed some of our GWT projects back to using the standard maven location of src/main/webapp for the web resources. When we started these projects, the Eclipse GWT plug-in did not support that, so my colleagues insisted on using the war directory (I am an IntelliJ IDEA user/fan myself). Apart from being non-standard, this had the disadvantage that some generated content was also put in that directory, cluttering it and risking accidental commits of these files (I know svn ignore can help, but still).

Unfortunately, the change was not as plain sailing as hoped. In fact, the gwt-maven-plugin has some problems that make this impossible out of the box. The advantage of open source solutions: you can create a patch :-).

Here are some excerpts from a pom configuration to make this work. To allow “mvn jetty:run” to work from a clean workspace:

            <baseResource implementation="org.mortbay.resource.ResourceCollection">
                <!-- need both the webapp dir and the location where GWT puts its generated output -->
                <resourcesAsCSV>src/main/webapp,${project.build.directory}/${project.build.finalName}</resourcesAsCSV>
            </baseResource>

To allow “mvn gwt:run” to work from a clean workspace:

        <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs>
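For context, the surrounding plugin configuration might look roughly like this (a sketch only: the `hostedWebapp` value and the plugin coordinates are assumptions based on the codehaus gwt-maven-plugin, adapt to your project):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>gwt-maven-plugin</artifactId>
    <version>1.2.CPFIX</version>
    <configuration>
        <!-- serve web resources from the standard maven location in dev mode -->
        <hostedWebapp>src/main/webapp</hostedWebapp>
        <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs>
    </configuration>
</plugin>
```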

You can find the patched version of the gwt-maven-plugin, version 1.2.CPFIX, on the Geomajas repo. Details of the patch can be found in their issue tracker.

GWT: 0 to 60mph in no time, Chris Ramsdale

JBossWorld Boston notes, GWT: 0 to 60mph in no time, Chris Ramsdale

From 25000ft
– toolkit, not framework
– code in Java, run as JavaScript
– one codebase, any browser
– makes AJAX a piece of cake, and faster
– used in many Google projects like Google Wave and AdWords

– SDK, compiler, generator
– eclipse plug-in
– speed tracer

Focus on users
– Our users, Developers
— leverage existing IDEs and tools
— minimize refresh time between code changes
— automate where possible
– Your users, Customers
— minimize startup time
— make it a comfortable experience
— allow them to select the browser

Java to JavaScript compiler, right?
– compiling to JavaScript instead of compiling to Assembler?
– You can pretty print if needed

From code to deployment

– provide power behind your GWT app

Ajax helper

Creating UIs

– utilize common development practices
– minimize boilerplate code
– remove a few frustrations along the way

UiBinder: declarative UI layout in XML

– different ways of loading the JavaScript

Use “-gen” to display the generated code

Tips and tricks
– Reduce optimizations, reduce compile time
— -draftCompile : skip all optimizations, development only

Optimize for user
– bundle resources
– split code

interface Bla extends ClientBundle {
    ContactCss contactCss();
    ImageResource contactImage();
}

insert runAsync for code splitting

void onOkClicked(ClickEvent event) {
    GWT.runAsync(new RunAsyncCallback() {
        public void onFailure(Throwable reason) { /* code download failed */ }
        public void onSuccess() { /* code that lives behind the split point */ }
    });
}
“direct” approach
Write a bunch of widgets with self-contained logic
– hard to test – need GWTTestCase
– mocks not encouraged – harder to write smaller tests
– platform specific UI code – limits code reuse
– Too many dependencies, difficult to optimize

MVP approach
– model – DTOs and business logic
– View – the display
– Presenter – application logic
– be practical
– avoid rigid patterns
– put complex logic in presenters

– MVC: the view contains the event logic
– MVP: the presenter contains render logic + event logic, with a separate view

You only have to test the model and presenter, rest is GWT itself and tested by Google :-)
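The MVP split above can be sketched in a few lines (names like LoginView/LoginPresenter are mine, not from the talk):

```java
// Minimal MVP sketch (illustrative names, not from the talk). The presenter
// holds the application logic and talks to the view through an interface,
// so a plain unit test with a hand-written fake view is enough --
// no GWTTestCase needed.
interface LoginView {
    String getUserName();
    void showGreeting(String text);
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) {
        this.view = view;
    }

    // event logic lives in the presenter, not in the widget
    void onLoginClicked() {
        view.showGreeting("Hello " + view.getUserName());
    }
}
```

Because the presenter only sees the LoginView interface, a fake view in an ordinary JUnit test covers the event logic without touching GWT.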

Technology interoperability
– Seam

Making the cloud a reality

BeJUG talk, NoSQL with Hadoop and Hbase, Steven Noels

Notes are a little bit cryptic, but still…

NoSQL with HBase and Hadoop, Steven Noels, Bejug 17.06.2010


“An evolution driven by pain”
Various types of databases, standardized to RDBMS, further simplified to ORM frameworks

We are now living in a world with massive data stores, with caching, denormalization, sharding, replication,… There came a need to rethink the problem, resulting in NoSQL.

Four trends:
– data size, every two years more data is created than existed before
– connectedness, more and more linking between data
– semi-structured data
– architecture, from single client for data, to multiple applications on data (make the db an integration hub), to decoupled services with their own back-end (not mentioned, but the next step will be integration of the back-ends)

Data management was a cost (hardware, DBA, infrastructure people, DB licenses,…)
Moving to considering data as an opportunity to learn about your customers, so you should capture as much as you can.

It is a Cambrian explosion (lots of evolution/new species, but only the toughest/best will survive):
HBase, Cassandra, CouchDB, neo4j, riak, Redis, MongoDB,…

Some solutions may no longer exist in a couple of years, while others will mature and gain popularity.

Common themes:
– scale, scale, scale
– new data models
– devops, more interaction between developers, dba, infrastructure
– N-O-SQL, not only SQL
– cloud: technology is of no interest any more

New data:
– Sparse structures
– weak schemas
– graphs
– semi-structures
– document oriented

– not a movement
– not ANSI NoSQL-2010: there is no standard, and it is not expected that there soon will be
– not one size fits all
– not (necessarily) anti-RDBMS
– not a silver bullet

NoSQL is pro choice

Use NoSQL for…
– horizontal scale (out instead of up)
– unusually structured data (free-form)
– speed (especially for writes)
– the bleeding edge

Use SQL/RDBMs for…
– normalization
– a defined liability


See also Google Bigtable and Amazon Dynamo papers, Eric Brewer’s CAP theorem
discuss NoSQL papers :

Dynamo: coined the term “eventual consistency”, consistent hashing
Bigtable: multi-dimensional column oriented database, on top of GoogleFileSystem, object versioning
CAP: you can only have two out of three of “strong consistency”, “high availability”, “partition tolerance”
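Dynamo's consistent hashing can be sketched with a sorted ring (a toy version; real implementations use proper hash functions and many virtual nodes per server):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hashing sketch (illustrative, not Dynamo's implementation):
// nodes sit on a hash ring; a key belongs to the first node found clockwise
// from the key's hash, wrapping around at the end of the ring.
class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node) {
        ring.put(node.hashCode(), node);
    }

    String nodeFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        Integer slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }
}
```

The point of the ring: adding or removing one node only moves the keys adjacent to it, instead of rehashing everything as naive modulo hashing would.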

Difference between ACID (RDBMS, pessimistic, strong consistency, less available, complex, analyzable) and BASE (availability and scaling highest priority, weak consistency, optimistic, best effort, simple and fast)

Hadoop: HDFS + MapReduce, single filesystem and single execution space
MapReduce is used for analytical and/or batch processing
Hadoop ecosystem: Chukwa, HBase, HDFS, Hive, Mapreduce, Pig, ZooKeeper,…
Benefits of parallelization, more ad-hoc processing; the compartmentalized approach reduces operational risk
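The map/reduce idea can be illustrated with an in-memory word count (a toy sketch in plain Java, not the Hadoop API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory word count in the map/reduce style (not the Hadoop API):
// the "map" phase emits a (word, 1) pair per word, grouping by key happens
// in the HashMap, and the "reduce" phase is the summation per key.
class WordCount {
    static Map<String, Integer> count(String... lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {                  // map: one record per line
            for (String word : line.split("\\s+")) { // emit (word, 1)
                Integer old = counts.get(word);      // shuffle + reduce: sum per key
                counts.put(word, old == null ? 1 : old + 1);
            }
        }
        return counts;
    }
}
```

In Hadoop the same split is what buys parallelism: map tasks run independently per input block, and reduce tasks run independently per key group.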



  • key-value stores

    focus on scaling huge amounts of data

    Redis:
    – VMware-sponsored
    – very fast, but mostly one server

    Voldemort:
    – LinkedIn
    – persistent, distributed
    – fault-tolerant
    – Java based

  • column stores

    BigTable clones
    sparse tables
    data model: column families -> columns -> cells
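The column-store data model is often described as a sparse, sorted, multi-dimensional map; a plain-Java sketch (not the HBase API) makes the nesting concrete:

```java
import java.util.TreeMap;

// Plain-Java sketch of the Bigtable/HBase data model (not the HBase API):
// a sorted map of row key -> column family -> qualifier -> timestamp -> value.
// Sparseness comes for free: absent cells simply have no entry.
class SketchTable {
    private final TreeMap<String, TreeMap<String, TreeMap<String, TreeMap<Long, String>>>> rows =
            new TreeMap<>();

    void put(String row, String family, String qualifier, long ts, String value) {
        rows.computeIfAbsent(row, r -> new TreeMap<>())
            .computeIfAbsent(family, f -> new TreeMap<>())
            .computeIfAbsent(qualifier, q -> new TreeMap<>())
            .put(ts, value);
    }

    String get(String row, String family, String qualifier, long ts) {
        try {
            return rows.get(row).get(family).get(qualifier).get(ts);
        } catch (NullPointerException absent) {
            return null; // sparse: missing cells are simply not stored
        }
    }
}
```

The timestamp level is what the "multi-dimensional" bullet below refers to: every cell can hold several versions.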


    HBase:
    – StumbleUpon, Adobe, Cloudera
    – sorted
    – distributed
    – highly-available
    – high performance
    – multi-dimensional (timestamp)
    – persisted
    – random access layer on HDFS
    – has a central master node

    Cassandra:
    – Rackspace, Facebook
    – key-value with added structure
    – reliability (no master node)
    – eventual consistent
    – distributed
    – tunable partitioning and replication
    – PRO: linear scaling, write-optimized
    – CON: a row must fit in RAM, only PK-based querying

  • document databases

    Lotus Notes heritage
    key-value stores but DB knows what the value is
    documents often versioned
    collections of key-value collections
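That "the DB knows what the value is" idea, plus versioning, can be sketched in plain Java (illustrative only, not the CouchDB or MongoDB API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy document store sketch (illustrative, not a real product's API):
// unlike an opaque key-value store, the store treats the value as a
// structured document, and keeps every version that was ever stored.
class DocumentStore {
    private final Map<String, List<Map<String, Object>>> docs = new HashMap<>();

    void put(String id, Map<String, Object> document) {
        docs.computeIfAbsent(id, k -> new ArrayList<>()).add(document);
    }

    Map<String, Object> latest(String id) {
        List<Map<String, Object>> versions = docs.get(id);
        return versions == null ? null : versions.get(versions.size() - 1);
    }

    int versionCount(String id) {
        List<Map<String, Object>> versions = docs.get(id);
        return versions == null ? 0 : versions.size();
    }
}
```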

    CouchDB:
    – fault tolerant
    – schema-free
    – document oriented
    – RESTful HTTP interface
    – document is a JSON object
    – view system is MapReduce based, Filter, Collate, Aggregate, all javascript
    – out of the box, all data needs to fit on one machine

    MongoDB:
    – like CouchDB
    – C++
    – performance focus
    – native drivers
    – auto-sharding (alpha)


  • graph databases

    data is nodes + relationships + key/value properties
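That model is small enough to sketch in plain Java (illustrative, not the Neo4j API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Tiny property-graph sketch (illustrative, not the Neo4j API):
// nodes carry key/value properties and typed relationships to other nodes.
class Node {
    final Map<String, Object> properties = new HashMap<>();
    final List<Relationship> relationships = new ArrayList<>();

    Node relateTo(Node other, String type) {
        relationships.add(new Relationship(type, other));
        return this;
    }
}

class Relationship {
    final String type;
    final Node target;

    Relationship(String type, Node target) {
        this.type = type;
        this.target = target;
    }
}
```

Queries become graph traversals (follow relationships of a given type), which is why these stores scale with complexity of the connections rather than raw data volume.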

    Neo4j:
    – mostly RAM centric
    – SPARQL/SAIL implementation
    – scaling to complexity (rather than volume?)
    – “whiteboard” friendly
    – many language bindings
    – little remoting