
How to make your small team more efficient

We’re a small Java shop of three, offering a SaaS-based e-commerce solution for dozens of customers. The challenges of both e-commerce and SaaS can sometimes seem like minefields given the plethora of competitors, growth, and perpetual change. This post details the factors that I believe have contributed to our success so far.

The technical details
The first version of our e-commerce solution went live around 2000. It was implemented in classic ASP; there was no established alternative available, as everything on the Internet back then was the result of blood, sweat, and tears.

Our e-commerce solution evolved from a prototype for a single instance to a multi-tenant, full-featured product that allows for a large degree of customization. As with all things still evolving, some… let’s call them flaws… snuck their way into our source code one way or the other. Having our code continue to evolve for over a decade after that point didn’t improve the situation.

About three years ago, we decided that our (still ASP) engine was in dire need of a major overhaul. And even then, with plenty of alternatives now available, we still chose to implement our own homegrown engine.

Customizing an existing solution saves time, no doubt. On the other hand, it forces you into someone else’s release cycles, and the day always comes when you’d prefer to stay on an old version but really want a feature that’s only available in a newer one. As we prefer solving problems that we’re responsible for ourselves, instead of getting in trouble because of someone else’s software, we opted for “build” instead of “buy.”

We decided on Java as our central programming language because we were able to leverage existing knowledge within our team. Plus, the toolset for continuous integration and continuous deployment seemed to be more advanced than that of PHP, for example. Besides that, having tinkered with ASP for about a decade already, we looked forward to static typing and compile-time errors. This was a very personal choice, and to this day I’m still happy with our decision.

Was Java (or, earlier, ASP) the reason for our success? No, probably not. At least, not directly. The reason we’ve succeeded with our product so far is that, all along, we’ve used a programming language we all know well and like working with every day.

Our tools
Developing in ASP was fine: you changed one thing and it was available online immediately (*cough*). That worked years ago, but with multi-tenancy and some unwanted side effects between deployments popping up from time to time, we wondered whether a little more of a “process” between development and deployment wouldn’t help our platform more than it impeded our development efforts.

Sometimes the build system is all your “process” consists of
We used Maven (YMMV) from day one onward, and thus were perfectly able to deploy our solution automatically. Getting it up and running correctly took quite a bit of work though. I can’t say if it would have been any easier with Gradle. If I asked three people about this, I’d get five different answers. We work in a hype-based business much more than in a cloud-based business, so we try not to get excited about things “just because.” If something works, we try to not break it by “improving” it.

Automated testing makes our day
We rely heavily on Selenium for test automation. Before publishing changes, we’re able to have a tenant’s shop clicked through automatically. Even automated, these tests take lots of time, so we simply couldn’t do this manually with the manpower we have.
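
We don’t share our actual test code here, but to give a rough idea, a minimal Selenium sketch in Java could look something like the one below. The shop URL and element ids are made up for illustration and would need to be replaced with a real tenant’s shop.

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TenantShopSmokeTest {

  private WebDriver driver;

  @Before
  public void setUp() {
    driver = new FirefoxDriver();
  }

  @Test
  public void visitorCanAddProductToCart() {
    //hypothetical tenant URL and element ids, adjust to the shop under test
    driver.get("https://tenant.example.com/shop");
    driver.findElement(By.id("product-1")).click();
    driver.findElement(By.id("add-to-cart")).click();
    assertTrue(driver.findElement(By.id("cart-count")).getText().contains("1"));
  }

  @After
  public void tearDown() {
    driver.quit();
  }
}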

We started with Jenkins and we’re still using it today. It’s probably not the best solution, as some aspects of it are tedious to configure, but it gets the job done. Plus, it’s free, which was one of the main decision factors when we picked it. We didn’t even try other tools because Jenkins was the only one that was free, which brings me to another major point:

Which tools are worth paying for?
We’re mainly using free tools and frameworks (in addition to those I mentioned already):

  • Java
  • Spring Framework (including all kinds of sub-projects)
  • jQuery
  • MongoDB
  • Citrix XenServer
  • Eclipse IDE

And we’re using some paid software:

  • Windows Pro on our developer machines
  • Windows Server
  • Microsoft Exchange
  • Microsoft Office
  • JIRA
  • Confluence
  • ruxit

I used to believe that free software was better than paid software because we’re developers and can make anything fit our needs. While that’s true, it’s not always what we want to spend our time on.

We still use Jenkins, for example, because by now it does what we expect it to do. It took quite some coding to get there, however; coding time that we could otherwise have invested in our product.

Jenkins does the job, but required lots of additional work in our environment

Of course, if we were to start over today, we would evaluate all the premium tools available for continuous integration and deployment and possibly decide on different tooling options. Why? Because while working on other projects, occasionally in cooperation with other companies, we’ve encountered paid tools like JIRA and Confluence and learned that they provide more value out of the box than their free counterparts.
Once we came to see the value of some of these premium offerings, we began to consider other paid software as well. For example, we’ve opted to pay for the Atlassian products because they work better for us than the free office software suites that are available.

Our tools are definitely one of the factors that influence how much time we’re able to spend with our customers. And so we’ve learned an important lesson:

Free is never really free
JIRA and Confluence are premium tools. There are free alternatives, but they either aren’t able to fulfill our needs or would require us to invest too much time (i.e., money) into setting them up. Atlassian offers competitive pricing for businesses of our size, which makes it beneficial for us to use Confluence and JIRA.

JIRA’s functionality is far superior to that of free tools

When we heard about ruxit, it was the advantages we considered first, not the pricing. We used to use Nagios to monitor our infrastructure. Out of the box, Nagios does absolutely nothing but display current metrics. No historical data. Not even one second in the past. Every additional host and every additional metric you want to monitor requires a deep dive into the configuration files. Recording historical data requires you to install a plug-in. Nearly everything you really need requires you to install a plug-in and configure it for every single host in your environment.

ruxit in turn requires you to just install a single agent per host. Once it’s running, the data starts popping up on your dashboard. Besides that, you don’t need a dedicated host for persisting the monitoring data. Data is stored in the cloud. So you just open your dashboard and have everything there. No need to manually add new processes, etc.

ruxit’s auto-detection enables us to always have a live view on our environment

Another great thing that makes ruxit worth the money is its automated performance baselining. You don’t set a warning threshold on your CPU, for example. ruxit keeps track of the usual behavior and recognizes when performance goes south. If something fails, ruxit connects the dots for you, so you don’t get five notifications when the CPU maxes out (which slows down services everywhere). You get a single notification.

I’m not going to go any deeper into this topic. The point is, some tools are absolutely worth the money. They save us time, which allows us to handle dozens of customers with a really small team.

Our business
Technology and tooling decisions are one thing. I think the main reason we’ve succeeded is the e-commerce value we add to our customers’ businesses.
We not only host their e-commerce platforms, we also guide them through the setup of SEO-optimized titles and descriptions for their products. We work closely with them so they can concentrate on their core business rather than on pleasing a search engine.

In a way, we’ve succeeded because we offer our customers what we ourselves expect from the tools we pay for. If JIRA, Confluence, or ruxit weren’t any better than the available free tools, we wouldn’t be willing to pay for them. And like each of these SaaS-based solutions, we also offer dedicated support to our customers. Taking care of customer needs on a personal level is definitely a major success factor for us.

Of course, there’s also still that special ingredient that’s the foundation of all successful businesses: your product idea. Unfortunately, I’m not allowed to talk about that, because, you know…

Conclusion
Have a team that is convinced of the worth of your product. Have a product that makes it possible (if not easy) to convince your customers of its value. And choose your tools wisely—the value they add to your daily life can really make or break you.

And finally: keep going. We lost potential customers early on because they were convinced that our solution wasn’t worth the money compared to the available free e-commerce solutions. Some customers came back to us later saying they weren’t aware of how much work our paid solution would have saved them. Some customers we never heard from again. As long as you have faith in your solution, there will be people out there who want to hear about it.

And don’t understate the benefits of your product. If you can add value for your customers, tell them about it.

Multi-page TIFF handling with ImageIO

Anyone who has ever had to serve image/tiff images via HTTP will sooner or later come across this issue.

“The image won’t automatically open on windows systems.”

Sure, one could alter the HKEY_CLASSES_ROOT\.tif registry key on Windows systems, but IMHO this is not an acceptable solution. Assume you’ve got a multi-page TIFF (e.g. automatically generated by some scanning device) and the only thing the user is interested in is viewing the content online, without caring about the raw data format.

One way would be to wrap the raw image data in a PDF document with one image per page, which is exactly what we’re going to do in this post.

All we need are 2 libraries and a little bit of glue code.

dependencies {
  compile group: 'com.sun.media', name: 'jai_imageio', version: '1.1' 
  compile group: 'com.lowagie', name: 'itext', version: '4.2.0'
}

The first library adds the required extensions to javax.imageio.*, as by default there are no image readers for TIFF. The second one is used for PDF creation.

So what we’re going to do is:

  1. Load image reader for desired format (tiff in our case)
  2. Load image from resource
  3. Read image contents
  4. Scale image to fit into page and add it to PDF

The snippet below will do the trick.

public void writeMultipageTiffAsPdfToStream(URL imageUrl, OutputStream outputStream) throws IOException, DocumentException {
  //get image reader for tiff
  ImageReader reader = getTiffImageReader();
  reader.setInput(openImageInputStream(imageUrl));

  //create A4 PDF Document and write content to stream
  Document doc = new Document(PageSize.A4, 0, 0, 0, 0);
  PdfWriter.getInstance(doc, outputStream);
  doc.open();

  //add one page to pdf for each image in multipage tiff
  int pages = reader.getNumImages(true);
  for (int imageIndex = 0; imageIndex < pages; imageIndex++) {
    //read image at index
    BufferedImage bufferedImage = reader.read(imageIndex);
    //convert to pdf document image
    Image image = Image.getInstance(bufferedImage, null);
    //scale to fit to page
    image.scaleToFit(PageSize.A4.getWidth(), PageSize.A4.getHeight());
    doc.add(image);
  }
  
  doc.close();
}

private ImageReader getTiffImageReader() {
  Iterator<ImageReader> imageReaders = ImageIO.getImageReadersByFormatName("TIFF");
  if (!imageReaders.hasNext()) {
    throw new UnsupportedOperationException("No TIFF Reader found!");
  }
  return imageReaders.next();
}

private ImageInputStream openImageInputStream(URL url) throws IOException {
  return ImageIO.createImageInputStream(url.openStream());
}
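
One way to wire this up for the HTTP use case mentioned at the beginning is a simple servlet. The sketch below is an assumption rather than part of the original code: it presumes the methods above live in a helper class we’ll call TiffToPdfConverter, and the servlet mapping and TIFF location are made up. The important bit is serving the result with the application/pdf content type so browsers open it directly.

import java.io.IOException;
import java.net.URL;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.lowagie.text.DocumentException;

//hypothetical wiring: TiffToPdfConverter is assumed to contain writeMultipageTiffAsPdfToStream
@WebServlet("/documents/scan.pdf")
public class ScanToPdfServlet extends HttpServlet {

  private final TiffToPdfConverter converter = new TiffToPdfConverter();

  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    //serve the converted document as PDF so browsers can display it directly
    response.setContentType("application/pdf");
    try {
      //assumed sample location of the multi-page tiff
      URL tiffUrl = new URL("file:///data/scans/example.tif");
      converter.writeMultipageTiffAsPdfToStream(tiffUrl, response.getOutputStream());
    } catch (DocumentException e) {
      throw new ServletException("PDF creation failed", e);
    }
  }
}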


Christoph Strobl is a Software Architect at pagu.at, mainly working on enterprise solutions for an Austrian telecommunications company. He’s passionate about architecture, design, implementation, and testing, with a special interest in topics related to Spring, the web, and search.

iBATIS to JPA with Spring Data

When dealing with persistence, it’s reasonable to fully embrace the framework of your choice. But as time goes by, requirements change, people change, frameworks change, and the choice of yesterday might no longer be the choice for tomorrow.

Let’s try to find a way to replace an iBATIS-based persistence layer with JPA, using Spring Data JPA.

What we have so far is an application with some DAOs defined via interfaces and their corresponding implementations based on SqlMapClientDaoSupport.

public interface TextDao {
  Text getByKey(String key);
  List<Text> getAll();
}

So the first thing we have to do in order to get going with a JPA implementation is to make the entity JPA-ready by adding @Entity and providing persistence metadata, such as the table name and the id attribute, via @Table and @Id.

@Entity
@Table(name="application_text")
public class Text implements Serializable {

  @Id
  @Column(unique = true)
  private String key;

  private String text;

  //getters, setters omitted
}

Now that the entity is finished, we’re ready to introduce a new interface that extends the current one as well as Spring Data’s Repository. Since we now have more than one bean implementing TextDao, we use @Qualifier to distinguish between them.

@Qualifier(value="textRepository")
public interface TextRepository extends TextDao, Repository<Text, String> {
}

Well, that was not much of a big deal. Next we have to extend the current DAO interface and add some @Query annotations, as the queries cannot be derived automatically from the method names.

public interface TextDao {
  @Query("select t from Text t where t.key = ?1")
  Text getByKey(String key);

  @Query(value="select t from Text t")
  List<Text> getAll();
}

To enable JPA support via XML configuration, the following lines have to be added to the application context XML. Of course, the configuration can be done via @Configuration too, which is recommended, but we’ll stick with XML for now (a hedged Java configuration sketch follows after the XML).

<jpa:repositories base-package="com.acme.repository" />

<bean id="entityManagerFactor"
  class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="dataSource" ref="applicationDataSource" />
  <property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
      <property name="generateDdl" value="false" />
      <property name="database" value="HSQL" />
    </bean>
  </property>
</bean>

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
  <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
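
As mentioned above, the same setup can also be expressed via @Configuration. The following is only a hedged sketch: the class and package names are made up, and it assumes a DataSource bean named applicationDataSource is defined elsewhere.

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.Database;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

//hypothetical Java configuration mirroring the XML above
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "com.acme.repository")
public class JpaConfiguration {

  @Bean
  public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource applicationDataSource) {
    HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    vendorAdapter.setGenerateDdl(false);
    vendorAdapter.setDatabase(Database.HSQL);

    LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
    factory.setDataSource(applicationDataSource);
    factory.setJpaVendorAdapter(vendorAdapter);
    return factory;
  }

  @Bean
  public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    return new JpaTransactionManager(entityManagerFactory);
  }
}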

Don’t forget to register the JPA provider of your choice within persistence.xml.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="APP">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
    </persistence-unit>
</persistence>

As we are lucky and have some integration tests in place, we’ll be able to verify the modifications quite fast. Once again, we use @Qualifier to retrieve the Spring Data repository instead of the iBATIS implementation.

public class TextDaoTest {

  @Autowired
  @Qualifier("textRepository")
  private TextDao textDao;

  @Test
  public void testGetAll() {
    List<Text> all = textDao.getAll();
    assertEquals(3, all.size());
  }

  @Test
  public void testGetByKey() {
    Text text = textDao.getByKey("existingKey");
    assertEquals("existingKey", text.getKey());
    assertEquals("someText", text.getText());
  }

  @Test(expected = ObjectNotFoundException.class)
  public void testGetInvalidByKey() {
    textDao.getByKey("keyThatDoesNotExist");
  }
}

Two of the three tests pass on the first run. The third one fails, as it expects an ObjectNotFoundException, which is not the default behavior in Spring Data JPA, where null is returned instead.

A brief look at the old implementation reveals the problem.

@Override
public Text getByKey(String key) {
  Map<String, Object> parameter = new HashMap<String, Object>();
  parameter.put("key", key);

  Text result = (Text) getSqlMapClientTemplate().queryForObject("Text.getByKey", parameter);
  if (result == null) {
    throw new ObjectNotFoundException("The text entry with key " + key + " could not be found in the database.");
  }

  return result;
}

To solve this, we have to dig a little deeper into the API and provide a custom implementation for getByKey by creating a new interface as well as the corresponding implementation, following the naming conventions of Spring Data Commons.

public interface TextRepositoryCustom {
  Text getByKey(String key);
}

public class TextRepositoryImpl implements TextRepositoryCustom {

  @PersistenceContext
  private EntityManager em;

  @Override
  @Transactional(readOnly = true)
  public Text getByKey(String key) throws IllegalArgumentException {
    Text result = em.find(Text.class, key);

    if (result == null) {
      throw new ObjectNotFoundException("The text entry with key " + key + " could not be found in the database.");
    }

    return result;
  }

  public void setEntityManager(EntityManager em) {
    this.em = em;
  }

}
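
One wiring detail worth spelling out: for Spring Data to pick up TextRepositoryImpl via its naming convention, the repository interface also has to extend the custom interface. Based on the interfaces shown above, TextRepository then looks like this:

@Qualifier(value="textRepository")
public interface TextRepository extends TextDao, TextRepositoryCustom, Repository<Text, String> {
}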

Nice, but this did not fully solve the problem. The code is executed and ObjectNotFoundException is raised, but Spring’s internal exception translation mechanism converts the exception into an UncategorizedDataAccessException wrapping the ObjectNotFoundException.

But there is a way around this issue: writing and registering a custom PersistenceExceptionTranslator.

public class LegacyExceptionTranslator extends HibernateExceptionTranslator {

  @Override
  public DataAccessException translateExceptionIfPossible(RuntimeException ex) {
    DataAccessException translatedException = super.translateExceptionIfPossible(ex);
    if (translatedException == null || translatedException instanceof UncategorizedDataAccessException) {
      throw ex;
    }
    return translatedException;
  }

}

The translator is then registered as a bean in the application context:

<bean id="exceptionTranslator" class="com.acme.core.LegacyExceptionTranslator"/>

All exceptions that can be translated will still be converted, but others, like our own ObjectNotFoundException, will be passed on unchanged.

All green now. Ready to remove the iBATIS implementation. :)



Christoph Strobl is a Software Architect at pagu.at, mainly working on enterprise solutions for an Austrian telecommunications company. He’s passionate about architecture, design, implementation, and testing, with a special interest in topics related to Spring, the web, and search.