Friday, 23 August 2013

Running Mule on Openshift - Part 2

A little while back I wrote a post on "Running Mule Natively on OpenShift". That post showed workarounds for a few conflicting "features" of the two products and demonstrated how to run a simple Hello World app. Since then, both products have moved up major versions, removing the need for some of the previous workarounds and introducing some new ones. I have also had time to expand on the simple Hello World app and try out some other Mule components, which also require some hacks to get up and running. In this article I'll do a quick recap of getting the app up and running and then show how to work with these new features.

HTTP Server Binding

In the previous post we tried to deploy the following app to Mule 3.3.1:
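
The original config snippet is missing from this post; a minimal sketch of such a Hello World app (flow name, path and namespaces are illustrative, omitted for brevity):

```xml
<flow name="helloWorldFlow">
    <!-- bind to the OpenShift-assigned internal address -->
    <http:inbound-endpoint exchange-pattern="request-response"
                           host="${OPENSHIFT_DIY_IP}" port="8080" path="hello"/>
    <set-payload value="Hello World"/>
</flow>
```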

The only thing to take note of here is that we are using the ${OPENSHIFT_DIY_IP} environment variable, which has replaced the previous ${OPENSHIFT_INTERNAL_IP}. This is the suggested IP address for creating socket connections on localhost. Typical values such as "localhost" and the loopback and wildcard addresses are all locked down.

However, if you try using this environment variable as your host you will get an error similar to the following:

Permission denied ( Failed to bind to uri ""

After digging through the source, it turns out there is a slight issue with Mule's TCP transport for versions < 3.4.

Here, the internal IP is a loopback address, so Mule forces it down the path of creating a TCP socket that listens on all interfaces for that port. Fortunately there is already a fix in the 3.4 release, which is now GA. If you want to use Mule versions < 3.4 then you will need to modify the TCP transport as detailed in the previous post.

HTTP Client Binding

The previous app just focused on exposing an inbound endpoint over HTTP and didn't look at using outbound endpoints. Take the following modified Hello World application:
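
Again, the config itself is not shown here; a sketch of such a flow (the external API address is purely illustrative):

```xml
<flow name="helloWorldItalianFlow">
    <http:inbound-endpoint exchange-pattern="request-response"
                           host="${OPENSHIFT_DIY_IP}" port="8080" path="hello"/>
    <!-- forward the request to an external translation API (URL illustrative) -->
    <http:outbound-endpoint exchange-pattern="request-response"
                            address="http://api.example.com/translate/hello-world/it"/>
</flow>
```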

Feeling a bit European, this modified app simply contacts an external HTTP API to request "Hello World" in Italian. Here, the http:outbound-endpoint shouldn't need to bind to any internal port, as that's not necessary for client TCP connections. However, when running the application you will still get: Permission denied ( Under the hood, Mule uses the Apache commons-httpclient 3 library for client connections, which in turn uses its DefaultSocketFactory class to create Socket objects. The class has several different constructors, and commons-httpclient by default will call:

public Socket(String host, int port, InetAddress localAddr, int localPort) throws IOException...

Here, null is passed for the "localAddr" argument. When "localAddr" is null, the socket will bind using InetAddress.anyLocalAddress(). This performs a bind on the local address/port, which is not necessary for client TCP connections, although under most circumstances it is harmless - except on OpenShift, where this is specifically locked down and results in the "Permission denied" exception being thrown.

This is rectified in later versions of the library, which have been refactored into the Apache HttpComponents project; however, the HttpClient interfaces have changed significantly, resulting in a non-trivial upgrade path. There is a solution however, with thanks to mikebennettjackbe - to create a custom SocketFactory that overrides the creation of the Socket and does not bind on any local address:
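
The factory class itself is missing from this post; below is a sketch of what such a class might look like against the commons-httpclient 3 ProtocolSocketFactory interface (the package name is my own; the class name matches the one referenced later in the post):

```java
package org.example.openshift; // hypothetical package

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.commons.httpclient.params.HttpConnectionParams;
import org.apache.commons.httpclient.protocol.ProtocolSocketFactory;

/**
 * Ignores the local address/port arguments so that no explicit local
 * bind is attempted - OpenShift denies binds on arbitrary addresses.
 */
public class OSProtocolSocketFactory implements ProtocolSocketFactory {

    public Socket createSocket(String host, int port) throws IOException {
        // the two-arg constructor connects without an explicit local bind
        return new Socket(host, port);
    }

    public Socket createSocket(String host, int port,
                               InetAddress localAddress, int localPort)
            throws IOException {
        return createSocket(host, port);
    }

    public Socket createSocket(String host, int port,
                               InetAddress localAddress, int localPort,
                               HttpConnectionParams params) throws IOException {
        // honour the connection timeout if one is set, still without binding
        Socket socket = new Socket();
        int timeout = params != null ? params.getConnectionTimeout() : 0;
        socket.connect(new InetSocketAddress(host, port), timeout);
        return socket;
    }
}
```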

To make commons-httpclient use the new OSProtocolSocketFactory instead of its own, we need to register it when the app starts up. One approach is to implement the MuleNotificationListener interface:
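
The listener class is not shown here either; a sketch, assuming Mule 3's MuleContextNotificationListener variant of the notification interface and the OSProtocolSocketFactory named above (package name is my own):

```java
package org.example.openshift; // hypothetical package

import org.apache.commons.httpclient.protocol.Protocol;
import org.mule.api.context.notification.MuleContextNotificationListener;
import org.mule.context.notification.MuleContextNotification;

public class SocketFactoryRegistrar
        implements MuleContextNotificationListener<MuleContextNotification> {

    public void onNotification(MuleContextNotification notification) {
        if (notification.getAction() == MuleContextNotification.CONTEXT_INITIALISED) {
            // replace the default http protocol with one backed by our factory
            Protocol.registerProtocol("http",
                new Protocol("http", new OSProtocolSocketFactory(), 80));
        }
    }
}
```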

This class allows us to check when the Mule context is initialised and register the new socket factory for the http and/or https protocols. After creating the above class, you just need to define it as a bean within your Mule config, like so:
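
A minimal sketch of that bean definition (the bean name and class/package are illustrative; Mule picks up notification-listener beans automatically):

```xml
<spring:beans>
    <spring:bean name="socketFactoryRegistrar"
                 class="org.example.openshift.SocketFactoryRegistrar"/>
</spring:beans>
```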

And that's it. You should now be able to use http outbound endpoints and connectors that rely on the Apache commons-httpclient 3 library, such as the Twitter Cloud Connector. There are more than likely other transports or connectors that have issues in this area, so feel free to comment or raise an issue or pull request on GitHub. The full project can be found here:

Monday, 4 March 2013

Natively Running Mule on OpenShift

In this article I'll show how you can run Mule natively on OpenShift without using a Servlet container, and show you how I got over a few implementation hurdles.

If you are familiar with Mule, you know it gives you many deployment options, including standalone deployment or embedding itself within a Java application or webapp. The recommended approach is to run Mule ESB standalone - from the command prompt, as a service or daemon, or from a script. This is the simplest architecture, so it reduces the number of points where errors can occur. It's typically best for performance as well, since it reduces the number of layers and eliminates the inherent performance impact of an application server on the overall solution. With Mule 3.x, you can also run multiple applications side by side in a Mule instance using the new deployment model, which supports live deployment and hot redeployment of applications. Lastly, standalone mode has full support for the Mule High Availability module and the Mule management console.

OpenShift gives you many choices for developing and deploying applications in the cloud. You can pick among PHP, Ruby, Perl, Python, Node.js or Java. As Mule is Java based, we are pretty much covered. OpenShift provides an end-to-end Java application stack including Java EE6, CDI/Weld and Spring. You can choose between multiple application servers for webapps, including JBoss AS7, JBoss EAP6, Tomcat, and GlassFish. But if you want to run Mule natively in standalone mode for the aforementioned benefits, you will need to create a "DIY" cartridge/application.

A "DIY" application is just a barebones app, with no server preloaded, ready to be tailored to your needs. With this app type, OpenShift is beginning to blur the line between an IaaS and a PaaS, providing you with a controlled and scalable environment, and at the same time giving you the freedom to implement the technology that best suits your needs.

Getting Started

Before attempting to create a DIY application on OpenShift, you should familiarize yourself with the technology you are about to use. You should have a clear understanding of the steps needed to set it all up on your workstation and then reproduce it on OpenShift.

For a Mule application we won't need JBoss or any application server, nor any servlet container at all. We just have to install Mule and start it up.

Doing this on your own workstation is as easy as downloading Mule, unzipping it, and then running:
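
The commands themselves are missing here; assuming Mule's standalone distribution layout, starting it looks something like (directory name is illustrative):

```shell
$ cd mule-standalone-3.3.1
$ ./bin/mule start
```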

And to stop Mule:
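
Again a sketch, using the same standalone distribution's wrapper script:

```shell
$ ./bin/mule stop
```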

Now we'll have to do the same on our server at OpenShift. First, let's create a new application named "mule":
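
The command isn't shown in this post; with the rhc client tools of the time, it would be something like (cartridge name is illustrative):

```shell
$ rhc app create -a mule -t diy-0.1
```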

Now let's see what we created. Running the following script:

This should output something similar to the following:

You can browse to your application's URL to see the default index page running. It's just the same static page you can find at raw/index.html

Now let's see what we have in our repo:

It's a pretty barebones app, but there's a folder that's quite interesting for us - .openshift/action_hooks:
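
The listing is missing from this post; for a DIY application of that era the hooks folder typically contained:

```shell
$ ls .openshift/action_hooks
build  deploy  post_deploy  pre_build  start  stop
```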

These are the scripts that OpenShift uses for building, deploying, starting and stopping our app, and they are executed on the remote OpenShift server. These are the scripts we will need to amend to download Mule and perform any configuration, as well as to start and stop our Mule server. Let's take a look at a simplified version of the scripts that we used to install Mule.

Installing Mule


The pre_build script is used for downloading the required Mule installation and unzipping it.
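
The script itself didn't survive in this copy of the post; a sketch of such a hook (Mule version and download URL are illustrative):

```shell
#!/bin/bash
# .openshift/action_hooks/pre_build - download and unpack Mule once
cd $OPENSHIFT_DATA_DIR
if [ ! -d mule-standalone-3.3.1 ]; then
    wget http://dist.example.org/mule/mule-standalone-3.3.1.tar.gz
    tar -xzf mule-standalone-3.3.1.tar.gz
fi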


Then to start the Mule server:
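
A sketch of the start hook (paths illustrative):

```shell
#!/bin/bash
# .openshift/action_hooks/start - launch the Mule server in the background
$OPENSHIFT_DATA_DIR/mule-standalone-3.3.1/bin/mule start
```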


And to stop the Mule server:
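
And a matching sketch of the stop hook:

```shell
#!/bin/bash
# .openshift/action_hooks/stop - shut the Mule server down
$OPENSHIFT_DATA_DIR/mule-standalone-3.3.1/bin/mule stop
```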

Upgrading the Java Service Wrapper

When you run the mule command, it launches the mule.bat or shell script in your MULE_HOME/bin directory. These scripts invoke the Java Service Wrapper. The Java Service Wrapper from Tanuki Software is a fancy little tool which helps with managing your application and the JVM it is running in. By default it uses sockets to communicate back and forth with the JVM, but OpenShift is very restrictive about which IPs and ports you are allowed to listen on.

By default, the current Mule 3.3.1 release uses version 3.5.7 of the Java Service Wrapper. If you try running the default Mule installation on OpenShift, you will get the following error:

"unable to bind listener to any port in the range 32000-32999. (Permission denied)"

The Java Service Wrapper is controlled by a wrapper.conf file that can be found in your MULE_HOME/conf directory and has a host of configuration options, including setting the range of ports that the wrapper can listen on. Ports aside, OpenShift only allows applications to bind to a specific IP address via the environment variable OPENSHIFT_INTERNAL_IP. Unfortunately there is no configuration option to override this IP address. Game over!

Extra life! In a later version of the wrapper, there is a new configuration option, wrapper.backend.type=PIPE, which avoids sockets altogether and uses pipes instead to get around this problem.

To upgrade the wrapper, we simply download the later wrapper libraries and replace them within the MULE_HOME/lib directory.
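
The original snippet is missing; a sketch of the idea (wrapper version, download URL and exact file locations inside the Mule distribution are illustrative - check where your distribution keeps the wrapper jar and native library):

```shell
MULE_HOME=$OPENSHIFT_DATA_DIR/mule-standalone-3.3.1
wget http://downloads.example.org/wrapper-linux-x86-64-3.5.17.tar.gz
tar -xzf wrapper-linux-x86-64-3.5.17.tar.gz
# replace the bundled wrapper jar and native library with the newer ones
cp wrapper-linux-x86-64-3.5.17/lib/wrapper.jar    $MULE_HOME/lib/
cp wrapper-linux-x86-64-3.5.17/lib/libwrapper.so  $MULE_HOME/lib/
```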


To update the wrapper.conf file with the new configuration, we take a copy of the original wrapper.conf file, amend it to contain the wrapper.backend.type=PIPE option, and include it within our git repo so that we can replace the original when building the installation.
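
The relevant addition to the copied wrapper.conf is just the one property:

```
# excerpt from the amended wrapper.conf
wrapper.backend.type=PIPE
```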


Deploying a Mule application

Deploying the application is as simple as copying a Mule application archive to the required apps directory:
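
The command is missing from this copy of the post; a sketch (archive name and paths are illustrative):

```shell
# copy the packaged Mule app into the hot-deployment directory
cp mule-hello-world.zip $OPENSHIFT_DATA_DIR/mule-standalone-3.3.1/apps/
```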


Where the archive is a simple Mule application exposed over HTTP that returns "Hello World".

HTTP Binding

The only thing to take note of here is that we are using the ${OPENSHIFT_INTERNAL_IP} environment variable. This is the suggested IP address for creating socket connections on localhost. Typical values such as "localhost" and the loopback and wildcard addresses are all locked down.

However, if you try using this environment variable as your host you will get an error similar to the following:

Permission denied ( Failed to bind to uri ""

As you can see, the internal IP resolves fine and we are using 8080, which is the suggested port for HTTP connections, but still no dice.

Hacking the TCP transport

After digging through the source, it turns out there is a slight issue with Mule's TCP transport.

Here, the internal IP is a loopback address, so Mule forces it down the path of creating a socket that listens on all interfaces for that port. Fortunately there is already a fix in the upcoming 3.4 release - MULE-6584: HTTP/TCP bound to listens on all interfaces.

Unfortunately, it's only upcoming at the moment. So instead I have amended the source of this transport myself for the same functionality and included the resulting jar as part of my DIY project to replace the original transport jar.


And that's it!

If you now take a look at your app's URL, you should now see "Hello World"! The full DIY project for this, with instructions, can be found on GitHub:

Sunday, 3 February 2013

Cross-Origin Resource Sharing with Mule, AJAX and JavaScript

The Same-Origin Policy

The Same-Origin policy is a security policy enforced on client-side web apps to prevent interactions between resources from different origins. While useful for preventing malicious behaviour such as XSS (Cross-Site Scripting) attacks, this security measure also prevents useful and legitimate interactions between known origins.

For example, your new awesome JavaScript mashup hosted at one domain might want to use a REST API hosted at another. However, because these are two different origins from the perspective of the browser, the browser won't allow a script from the first origin to fetch resources from the second, because the resource being fetched is from a different origin.

Cross-Origin Resource Sharing

Fortunately, there is a solution via Cross-Origin Resource Sharing (CORS). The CORS spec was developed by the World Wide Web Consortium (W3C) to support this very case. It's a working draft but is already supported by the majority of web browsers, probably including the very browser you are using to view this page. The full specification can be found at: and supported browsers can be found here:

How CORS works

CORS works via a set of HTTP headers in the request from the client app and the response from the requested resource. In its simplest form, the requesting application specifies an Origin header in the request which describes the origin of the request, and the requested resource replies in turn with an Access-Control-Allow-Origin header indicating the specific origins that are allowed to access the resource.

Request headers: Response headers:
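
The header examples are missing from this copy of the post; an illustrative exchange (the origin value is made up):

```
# Request headers
Origin: http://mymashup.example

# Response headers
Access-Control-Allow-Origin: http://mymashup.example
```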

There are more complicated scenarios that require additional HTTP headers when using non-simple headers. More information on this can be found here: For the purposes of this post we will just be using simple headers.

Using CORS with the Mule HTTP transport

To demonstrate CORS in action, I'll show a simple JavaScript client app using JQuery to access a simple HTTP service in Mule.

Simple JQuery Client

Simple HTTP Mule Flow
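
The flow snippet itself is missing here; a sketch of such a flow (endpoint details are illustrative):

```xml
<flow name="helloFlow">
    <http:inbound-endpoint exchange-pattern="request-response"
                           host="localhost" port="8081" path="hello"/>
    <set-payload value="Hello World"/>
    <!-- allow any origin; restrict to specific origins as required -->
    <set-property propertyName="Access-Control-Allow-Origin" value="*"/>
</flow>
```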

This is just a simple JQuery client and a simple HTTP Mule flow returning some plain text: "Hello World". The most important part here is the set-property element, where we are setting the Access-Control-Allow-Origin HTTP header to be returned in the response. Simple, right? We have just set the value to "*", indicating that any origin is allowed. This can be configured as needed to include only specific origins if you so desire.

Using CORS with the Mule AJAX transport

On top of the HTTP transport in Mule, there is also a specific AJAX transport. The Mule AJAX transport allows Mule events to be sent and received asynchronously to and from the web browser.

You might think that you would be able to set this property the same way. Unfortunately, no. Under the hood, the AJAX transport uses Jetty and the CometD libraries to provide the long-polling functionality, and these currently do not propagate HTTP headers set in Mule, instead setting their own.

Never fear, there is a solution. It's a little more long-winded, but still simple nonetheless. The solution relies on Jetty's configuration, which is used by the AJAX transport when running in embedded mode. This configuration can be overridden within your Mule application by providing a custom Jetty XML configuration file and creating a custom Handler to add the new HTTP headers.

Simple JQuery CometD client

To start let's amend the original client application to use CometD to subscribe to a channel in Mule.

Mule AJAX Flow

The Mule flow just polls every ten seconds and publishes a message to an AJAX outbound endpoint.
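
The flow config is not shown in this copy; a sketch of what it might look like, assuming Mule 3.3's poll message source and the AJAX transport (channel name and payload are illustrative):

```xml
<flow name="ajaxNotifierFlow">
    <!-- poll every ten seconds, producing a simple payload -->
    <poll frequency="10000">
        <set-payload value="Hello World"/>
    </poll>
    <!-- publish the message to subscribers of the channel -->
    <ajax:outbound-endpoint channel="/mule/notifications"/>
</flow>
```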

In addition to the standard AJAX connector configuration, we are injecting a reference to a custom jetty configuration file to register our CORS handler.

Jetty Configuration

This is just a simple Jetty configuration file, referenced from the previous Mule configuration, that registers our new custom Handler. The most important part here is the class reference that will be our new Handler to add the required headers: org.oreilly.mulecloudconnect.CORSHandler
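
The file itself is missing here; a sketch in the Jetty 6 (org.mortbay) XML configuration format, registering the handler class named above:

```xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
    "http://jetty.mortbay.org/configure.dtd">
<Configure id="Server" class="org.mortbay.jetty.Server">
    <!-- add our CORS handler to the server's handler chain -->
    <Call name="addHandler">
        <Arg>
            <New class="org.oreilly.mulecloudconnect.CORSHandler"/>
        </Arg>
    </Call>
</Configure>
```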

Custom CORS Handler

And finally, the last part of our CORS puzzle is the custom Handler itself. This class is an extension of the org.mortbay.jetty.handler.AbstractHandler class, which gives us access to the Servlet request and response. In this example we are simply adding the Access-Control-Allow-Origin header to the HttpServletResponse. But again, you can customize this to add specific origins and so on.
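
The class body is missing from this copy of the post; a sketch against the Jetty 6 AbstractHandler API, using the class name given above:

```java
package org.oreilly.mulecloudconnect;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.handler.AbstractHandler;

public class CORSHandler extends AbstractHandler {

    public void handle(String target, HttpServletRequest request,
                       HttpServletResponse response, int dispatch)
            throws IOException, ServletException {
        // add the CORS header; we deliberately don't mark the request as
        // handled, so the remaining handlers continue to process it
        response.addHeader("Access-Control-Allow-Origin", "*");
    }
}
```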

And that's it. Happy mashing!

Friday, 1 February 2013

Getting Started with Mule Cloud Connect: Accelerating Integration with SaaS, Social Media, and Open APIs - Sample Chapters

Print Edition Now Available!

With the number of open APIs reaching over 13,000 this year according to APIhub, 2013 will be all about how developers orchestrate APIs to create applications. Mule Cloud Connect is here to help. For those looking to get started with Mule and Mule Cloud Connect, or even just working with APIs, my latest O'Reilly book will get you up and running. If you're not already convinced, here are the first 2 chapters to get you started.

Local WebHooks with Mule Cloud Connect and LocalTunnel v2

When using an external API for WebHooks or callbacks, as discussed in Chapters 3 and 5 of Getting Started with Mule Cloud Connect, the API provider running somewhere out there on the web needs to call back your application, which is happily running in isolation on your local machine. For an API provider to call back your application, the application must be accessible over the web. Sure, you could upload and test your application on a public-facing server, but you may find it quicker and easier to work on your local development machine, and these are typically behind firewalls, NAT, or otherwise unable to provide a public URL. You need a way to make your local application available over the web.

There are a few good services and tools out there to help with this, ProxyLocal being one example. Alternatively, you can set up your own reverse SSH tunnel if you already have a remote system to forward your requests, but this is cumbersome to say the least. I find Localtunnel to be an excellent fit for this need, and Localtunnel has just recently released v2 of its service with a host of new features and enhancements. More information can be found here:

Installing Localtunnel

Those familiar with version 1 of the service will know that the v1 Localtunnel client was written in Ruby and required Rubygems to install it. The v2 client is now written in Python and can instead be installed via easy_install or pip.

If instead you're interested in using Localtunnel v1, then I wrote a previous blog post on the subject here:

To get started, you will first need to check that you have Python installed. Localtunnel requires Python 2.6 or later. Most systems come with Python installed as standard, but if not you can check via the following command:

$ python --version

More info on installing Python can be found here:

Once complete, you will need easy_install to install the Localtunnel client. If you don't have easy_install after you install Python, you can install it with this bootstrap script:

$ curl | python

Once complete, you can install the Localtunnel client using the following command:

$ easy_install localtunnel

First run with LocalTunnel

Once installed, creating a tunnel is as simple as running the following command:

$ localtunnel-beta 8082

The parameter after the command, "8082", is the local port we want Localtunnel to forward to, so whatever port your app is running on should replace this value. Each time you run the command you should get output similar to the following:

Port 8082 is now accessible from ...
Note: As v2 is still in beta, the command localtunnel-beta will eventually be installed as just localtunnel. This lets you keep the v1 client around, just in case anything goes wrong with v2 during the beta.

Configuring the Connector

Now onto Mule! To demonstrate, I will use the Twilio Cloud Connector example from Chapter 5. Twilio has an awesome WebHook implementation with great debugging tools. Twilio uses callbacks to tell you about the status of your requests: when you use Twilio to place a phone call or send an SMS, the Twilio API allows you to send a URL where you'll receive information about the phone call once it ends, or the status of the outbound SMS message after it's processed.
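
The example config is missing from this copy of the post; a sketch of what it might look like, assuming the Twilio connector's send-sms-message operation (flow names, config names and phone numbers are illustrative):

```xml
<flow name="sendSmsFlow">
    <twilio:send-sms-message config-ref="twilioConfig"
                             from="+15550001111" to="+15552223333"
                             body="Hello World"
                             status-callback-flow-ref="smsStatusFlow"/>
</flow>

<flow name="smsStatusFlow">
    <!-- no inbound endpoint: the connector generates one for the callback -->
    <logger level="INFO" message="SMS status: #[payload]"/>
</flow>
```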

This example uses the Twilio Cloud Connector to send a simple SMS message. The most important thing to note is the "status-callback-flow-ref" attribute. All connector operations that support callbacks will have an optional attribute ending in "-flow-ref" - in this case, "status-callback-flow-ref". As the name suggests, this attribute should reference a flow: the value must be a valid flow id from within your configuration. It is this flow that will be used to listen for the callback.

Notice that the flow has no inbound endpoint? This is where the magic happens: when Twilio processes the SMS message, it will send a callback automatically to that flow without you having to define an inbound endpoint. The connector automatically generates an inbound endpoint and sends the auto-generated URL to Twilio for you.

Customizing the Callback

The URL generated for the callback is built using 'localhost' as the host, the 'http.port' environment variable or 'localPort' value as the port, and a path that is typically just a randomly generated string or static value. So if I ran this locally it would send Twilio my non-public address, something like: http://localhost:80/...vv3v3er342fvvn. Each connector that accepts HTTP callbacks provides an optional http-callback-config child element to override these settings. These settings can be set at the connector's config level as follows:

Here we have amended the previous example to add the additional http-callback-config configuration. The configuration takes three additional arguments: domain, localPort and remotePort. These settings will be used to construct the URL that is passed to the external system. The URL will be the same as the default generated URL of the HTTP inbound-endpoint, except that the host is replaced by the 'domain' setting (or its default value) and the port is replaced by the 'remotePort' setting (or its default value).
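
The amended config is not shown in this copy; a sketch using the three arguments just described (the config name, credential placeholders and domain value are illustrative - use the host from your own Localtunnel URL):

```xml
<twilio:config name="twilioConfig"
               accountSid="${twilio.sid}" authToken="${twilio.token}">
    <twilio:http-callback-config domain="example.localtunnel.example"
                                 localPort="8082" remotePort="80"/>
</twilio:config>
```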

In this case we have used the domain from the URL that Localtunnel generated for us earlier, and set the localPort to 8082, as we ran the Localtunnel command using port 8082, and the remotePort to 80, as the Localtunnel server just runs on port 80.

And that's it! If you run this configuration you should start seeing your callbacks being printed to the console. The same goes for any OAuth connectors too: if you're using any OAuth connectors built with the DevKit OAuth modules, you can configure the OAuth callback in a similar fashion.

A full Mule/Twilio WebHook project can be found here: