Wednesday, December 29, 2010

Hibernate Id Generation and Oracle Sequences

I thought that the @SequenceGenerator allocationSize attribute defined the amount by which the sequence is incremented. That's what the API says, anyway... But if you forget that an Oracle sequence can't be incremented by more or less than the value you specified when you created it, you may find yourself in a trap.
By default you create your sequences with an increment of 1. This becomes a problem if your app does bulk inserts: every insert needs a select against the sequence to generate an id, which in turn hinders the performance of the app.
One of the ways Hibernate tries to overcome this problem is a simple strategy called HiLo. Basically, Hibernate uses the allocationSize attribute as a multiplier on the sequence value. If the sequence returns 'x', Hibernate uses ids from '(x-1)*allocationSize' to 'x*allocationSize' for the following inserts, after a single select on and increment of the sequence.
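To make the arithmetic concrete, here is a tiny plain-Java sketch of that mapping (illustrative only, not Hibernate's actual code):

```java
public class HiLoSketch {
    // illustrative only: the id range HiLo derives from one sequence value
    static long firstId(long seq, int allocationSize) {
        return (seq - 1) * allocationSize;
    }

    static long lastId(long seq, int allocationSize) {
        return seq * allocationSize;
    }

    public static void main(String[] args) {
        // sequence returns 3, allocationSize is 10: ids 20..30 are handed
        // out without touching the database again
        System.out.println(firstId(3, 10) + ".." + lastId(3, 10)); // prints "20..30"
    }
}
```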
While I think this is a feasible approach, the downside is that all the apps using this sequence have to use the same algorithm (with the same fixed allocationSize) to generate ids, or the generated ids may collide with each other.
Instead of sequences, a table that stores the ids is much better, since you can set the increment size to the amount you actually require. The following annotation, together with the optimizer below, yields a simpler behavior that matches what the API actually states. Here is the annotation:
@GeneratedValue(strategy = GenerationType.TABLE, generator = "genFooTable")
@GenericGenerator(name = "genFooTable",
        strategy = "org.hibernate.id.enhanced.TableGenerator",
        parameters = {
                @Parameter(name = "table_name", value = "SEQUENCES"),
                @Parameter(name = "value_column_name", value = "NEXTVALUE"),
                @Parameter(name = "segment_column_name", value = "NAME"),
                @Parameter(name = "segment_value", value = "genFooTable"),
                @Parameter(name = "increment_size", value = "10"),
                @Parameter(name = "optimizer", value = "org.mca.PooledIdOptimizer")
        })
The optimizer :
public class PooledIdOptimizer extends OptimizerFactory.OptimizerSupport {

    private long value;
    private long hiValue = -1;

    public PooledIdOptimizer(Class returnClass, int incrementSize) {
        super(returnClass, incrementSize);
    }

    public synchronized Serializable generate(AccessCallback callback) {
        // hit the database only when the current pool of ids is exhausted
        if (hiValue < 0 || value >= hiValue) {
            value = callback.getNextValue();
            hiValue = value + incrementSize;
        }
        return make(value++);
    }
}
I made a few edits to Hibernate's original pooled optimizer since I didn't like the way it works.

Monday, December 27, 2010

Friday, November 26, 2010

About JSF

I have been using JSF for at least 5 years, I think. And here I am, wondering how to make my menu item, which is a JSF component, open its link in a new window. I know I should check the reference documentation, and there is probably an attribute for it, but that seems like too much hassle. I could have just set the target attribute on a standard HTML anchor.
There is always an answer with JSF, but it's not always as simple as it should be. Even validating (or not validating) a component can be a problem. Is it immediate? Which form are the component and the button in? Is Ajax used? You have to consider all of these, or your button may not work, or the user may see unrelated validation messages.
I believe JSF is still good for apps where you need to process a good amount of user input, or show some fancy Ajax UI easily where you would otherwise have to deal with JS.
The question is: can't all this be simpler?

Monday, October 25, 2010

JSF Page Fragment Caching

Problem : Our menu took ages to render.
Our application menu had links to the various functions of the apps, 400+ nodes, each of which needed to be authorized based on an EL expression. JSF has to evaluate the EL for each node, decide whether it is to be rendered, and then actually render the menu.
Solution : Cache it.
The simplest solution is to cache your menu. Our menu changes when the user's roles change, which doesn't happen all that often, so JSF doesn't have to redo all that computing; and since lots of users share the same set of roles, they can share the cached menus.
How ?
As far as I am aware, Seam has a cache component. I didn't use that; I thought I could write a simple component to cache without much hassle. Here is how the tag looks:
<mca:cache region="menu" key="#{viewUtility.getMenuKey()}">
    ... menu goes here ...
</mca:cache>
We need a cache region and a cache key. The region specifies what you are caching; the key identifies the value. In this case I used the hash code of the user's roles, so that the same cached menu is used for different users as long as they have the same roles.
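A sketch of how such a key could be computed (a hypothetical helper, not the actual viewUtility code): sorting the role names into a set first makes the hash independent of the order the roles arrive in.

```java
import java.util.List;
import java.util.TreeSet;

public class MenuKeys {
    // hypothetical helper: users with the same roles get the same key,
    // regardless of the order the roles are listed in
    static String menuKey(List<String> roles) {
        return String.valueOf(new TreeSet<String>(roles).hashCode());
    }

    public static void main(String[] args) {
        // two users with the same roles in a different order share a key
        System.out.println(menuKey(List.of("ADMIN", "CLERK"))
                .equals(menuKey(List.of("CLERK", "ADMIN")))); // prints "true"
    }
}
```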
Much of the work is done in the renderer:
/*
 * User: malpay
 * Date: 12.Ağu.2010
 * Time: 10:53:06
 */
public class CacheRenderer extends Renderer {

    private final static Log log = LogFactory.getLog(CacheRenderer.class);

    private CacheManager cacheManager;

    public CacheManager getCacheManager() {
        if (cacheManager == null) {
            // the Ehcache CacheManager is looked up from the Spring context
            // (the bean name is assumed here)
            cacheManager = (CacheManager) FacesContextUtils.getWebApplicationContext(
                    FacesContext.getCurrentInstance()).getBean("cacheManager");
        }
        return cacheManager;
    }

    private void replaceResponseWriter(FacesContext context) {
        ResponseWriter rw = context.getResponseWriter();
        CacheWriter cw = new CacheWriter(rw);
        context.setResponseWriter(cw);
    }

    private boolean cacheNotUptoDate(CacheComponent cc, Cache cache) {
        return cache.get(cc.getKey()) == null;
    }

    public void encodeBegin(FacesContext context, UIComponent component) throws IOException {
        CacheComponent cc = (CacheComponent) component;
        Cache cache = getRegion(cc.getRegion());
        if (cacheNotUptoDate(cc, cache)) {
            // capture the children's output so it can be cached afterwards
            replaceResponseWriter(context);
        }
    }

    public void encodeEnd(FacesContext context, UIComponent component) throws IOException {
        CacheComponent cc = (CacheComponent) component;
        if (responseWriterReplaced(context)) {
            CacheWriter cw = (CacheWriter) context.getResponseWriter();
            String value = updateCache(cc, cw);
            // restore the original writer and send the captured markup to it
            // (getOriginal() is CacheWriter's accessor for the wrapped writer)
            context.setResponseWriter(cw.getOriginal());
            context.getResponseWriter().write(value);
        }
    }

    public void encodeChildren(FacesContext context, UIComponent component) throws IOException {
        CacheComponent cc = (CacheComponent) component;
        Cache cache = getRegion(cc.getRegion());
        if (cacheNotUptoDate(cc, cache)) {
            for (UIComponent child : component.getChildren()) {
                child.encodeAll(context);
            }
        } else {
            log.debug("rendering cache region : " + cc.getRegion() + " key : " + cc.getKey());
            char[] chars = cache.get(cc.getKey()).getValue().toString().toCharArray();
            context.getResponseWriter().write(chars, 0, chars.length);
        }
    }

    private String updateCache(CacheComponent cc, CacheWriter cw) {
        Cache cache = getRegion(cc.getRegion());
        String value = cw.getValue();
        cache.put(new Element(cc.getKey(), value));
        return value;
    }

    private boolean responseWriterReplaced(FacesContext context) {
        return context.getResponseWriter() instanceof CacheWriter;
    }

    private Cache getRegion(String region) {
        if (getCacheManager().getCache(region) == null) {
            getCacheManager().addCache(region);
        }
        return getCacheManager().getCache(region);
    }

    public boolean getRendersChildren() {
        return true;
    }
}
What it does is check whether the cache is up to date; if not, it renders the children as usual but captures the rendered portion of the page and updates the cache with it. If the cache is up to date, it doesn't render the children; it simply writes the cached markup to the response.
This works well in my scenario, but be aware it may require some adjustments for more general use.
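The CacheWriter used above isn't shown; conceptually it just captures everything the children render into a buffer, so that encodeEnd can both cache it and replay it on the real writer. Stripped of the JSF ResponseWriter plumbing, the capture idea is roughly this (plain java.io, names hypothetical):

```java
import java.io.StringWriter;
import java.io.Writer;

// captures everything written into a buffer instead of passing it through;
// the caller later decides where the captured text goes
public class CapturingWriter extends Writer {
    private final Writer original;
    private final StringWriter buffer = new StringWriter();

    public CapturingWriter(Writer original) {
        this.original = original;
    }

    @Override
    public void write(char[] cbuf, int off, int len) {
        buffer.write(cbuf, off, len);
    }

    @Override public void flush() { }
    @Override public void close() { }

    // the writer to restore once capturing is done
    public Writer getOriginal() {
        return original;
    }

    // the captured markup, ready to be cached and replayed
    public String getValue() {
        return buffer.toString();
    }
}
```

The real CacheWriter additionally has to implement the JSF ResponseWriter contract so the rest of the render phase keeps working against it transparently.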

Thursday, May 27, 2010

Facelet tip

A quick Facelets tip: if you need an extra attribute on a component, you can add and access it easily with Facelets. Here I needed to add a node attribute on a boolean checkbox which is part of a RichFaces tree.
<h:selectBooleanCheckbox id="tree_cbx" value="#{tree.getNode(item).selected}"
        node="#{item}">
    <a4j:support ajaxSingle="true" event="onchange" ... />
</h:selectBooleanCheckbox>
You can access the attribute easily once you have the component. Here is my AjaxListener:
public class YetkiNodeListener implements AjaxListener {

    public void processAjax(AjaxEvent event) {
        UIComponent holder = (UIComponent) ((UIComponent) event.getSource())
                .getParent();
        // the extra "node" attribute set on the checkbox in the page
        Object node = holder.getAttributes().get("node");
        // ...
    }
}

Wednesday, May 19, 2010


  • I should have started using Maven earlier. If you want to manage an in-house repository, check out Artifactory. I'll make better use of it on my next project.
  • Hudson: more capable than CC, easily configurable UI, pluggable, integrates with svn, maven, ant... And don't forget to revert before update.
  • HTML5 Canvas looks cool. 3D games + drawing tools + ?. If you haven't already, check this out ; , he also has a 3D engine based on JS. I'll try this out, maybe do a project.
  • There is Devoxx @ 15 Nov.; the call for papers deadline is 6 Jun.
  • Dynamic reporting with JasperReports. This guy has a clever and simple idea of how to do it. I implemented a similar thing, plus I added annotations to format and display the data.
  • Single Sign-On/Out with CAS.
  • Our project is still approaching its deadline. A bit tired.

Wednesday, March 24, 2010

Hibernate Usage Strategy

Lately I got to refresh my Hibernate knowledge. In previous posts I wrote about my experience with Hibernate here, and how things could get tangled up trying to load objects.
When using Hibernate, the most important decision you have to make is how you are going to manage sessions. Basically you have to choose between a method-scoped (service) session and a conversation-scoped, long-running session. My choice here is the first one. While a conversation-scoped session might be easier for the programmer, the downside is that database access can happen anywhere in your application, uncontrolled. I think that can leave you having to track down performance bottlenecks, plus it means keeping a big object (the session) in memory for some time.
Not using a conversation-scoped session forces you to determine an access strategy for your lazy associations. Note that you should still use these strategies even with a conversation-scoped session; the difference is that it doesn't force you to.
Using Join Fetches
Joining usually gets overlooked, but it's the first thing to do. If you need a lazy association of a list of results, you have to join fetch it. The query below loads the cities with their parks:
 select c from City c join fetch c.parks
Of course, if you also need to list cities which don't have parks, you have to use "left join fetch" instead.
I remember reading, probably in the Hibernate reference documentation, something like "caching is the last thing to do for performance". Now I think I underestimated the second-level cache. Tables that hold rarely changing data should be loaded eagerly and cached. I also use a separate select to load them. This is how I mark such fields:
@ManyToOne
@Fetch(FetchMode.SELECT)
CityType type;
Combined with the second-level cache, this makes Hibernate load the data with a separate select and cache it upon first access.
Using a cache will result in much smaller queries, and you won't have to write huge join statements to load lazy stuff.
I'll probably do more work using the cache in a clustered environment.
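For the second-level cache to hold it, the referenced entity itself also has to be marked cacheable; with Hibernate annotations that looks roughly like the following (a sketch; this assumes a cache provider such as Ehcache is configured, and the field names here are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// assumption: second-level caching is enabled in the Hibernate configuration
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class CityType {

    @Id
    private Long id;

    private String name;
}
```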

Monday, February 15, 2010

What I've been up to the last couple of weeks

The day job kept me quite busy the last couple of weeks. Our new project is not managed in an agile way, so after some months of preparing "big docs" we have finally hit the coding stage. Just before starting to code, we realised that we needed to change some of the architectural decisions we had assumed, and do some work on our code-base. We would probably have done that a lot earlier if we were on an agile route...
We decided to have 5 different applications that could be plugged in and out of each other, each formed by at least three modules: ejbs, ejb clients and a gui.
I found developing an app made up of different modules quite limiting in Eclipse. You can't configure different web.xml's or persistence.xml's for the different configurations you have.
IDEA 9, on the other hand, has just what I had in mind. It has a great artifacts screen where you can control which configuration goes where, where each module's compile output goes, and what jars to copy to which directory. The things I missed from Eclipse were the Subclipse and PropEdit plugins. Also, on IDEA the run screen strangely only had a "deploy all" button; to deploy my apps separately I needed to use the autodeploy feature or the remote server setting. IDEA 9 has a 30-day free evaluation period where you can test it. For now we are still bound to Eclipse, but we might choose IDEA in the future.
Application Server
Another major change: since we needed an application server for the ejbs, we couldn't use the developer-friendly Tomcat. I had a nice experience with the GlassFish V3 server while doing the JEE6 projects, but our company would probably choose BEA's WebLogic. Every AS needs some tweaks. WebLogic had a class loader problem with our app, which we needed to overcome by defining a weblogic-application.xml. Details can be found here. One last thing to note: WebLogic JNDI look-ups need the qualified class names. If you have a mapped name of "my/MyService", you have to look it up with "my/MyService#interfaceName..."
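A small sketch of that naming pattern (the interface here is hypothetical; on the server the resulting string would go into an InitialContext lookup):

```java
public class WeblogicNames {
    // hypothetical business interface, just to illustrate the naming pattern
    public interface MyService { }

    // WebLogic-style JNDI name: mappedName + '#' + qualified interface name
    static String jndiName(String mappedName, Class<?> iface) {
        return mappedName + "#" + iface.getName();
    }

    public static void main(String[] args) {
        // on the server this string would be passed to
        // new InitialContext().lookup(...)
        System.out.println(jndiName("my/MyService", MyService.class));
    }
}
```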
The GUI part of our app is made of JSF & Spring. I used Spring's LocalStatelessSessionProxyFactoryBean to look up the EJBs. It does its job, but I believe it is missing some features. It would be far more efficient, while developing, if I didn't have to restart my GUI after my EJBs were redeployed. In that case, instead of performing the lookup again, Spring just throws an exception, though I think I could find a solution for that.
That's mostly what I do here. I'm starting to get bored with it... From now on I think I will mostly just hit people behind the neck to point them in the right direction.

Monday, January 11, 2010

Multipart Requests and JSF

Multipart requests are used when the client needs to upload files to the server. A multipart request has a different encoding and needs to be parsed.
JSF 2 doesn't come with a file upload component, nor does it have any support for multipart requests. A multipart request to JSF (Mojarra impl. 2.0.2) just executes the 'restore view' and 'render response' phases, simply because JSF can't extract the request parameters from a multipart request.
Here is what I did to implement a file upload scenario in JSF:
Define the view
I defined my view as a form with multipart encoding and a file input:
<h:form enctype="multipart/form-data">
    User : <h:inputText value="#{join.userName}"/>
    Wall : <h:inputText value="#{join.wallName}"/>
    Avatar : <input id="avatar" name="avatar" type="file" />
    <h:commandButton value="Login" action="#{join.create}"/>
</h:form>
Wrap & Parse The Request
I used commons-fileupload to parse the request, and CDI event dispatching to hand the uploaded file to the beans:
1:  public class MultipartRequestWrapper extends HttpServletRequestWrapper {
2:
3:      private Hashtable<String, String[]> params = new Hashtable<String, String[]>();
4:
5:      MultipartRequestWrapper(HttpServletRequest request, Event<FileItem> uploadEvents) {
6:          super(request);
7:          DiskFileItemFactory factory = new DiskFileItemFactory();
8:          factory.setSizeThreshold(2097152);
9:          ServletFileUpload upload = new ServletFileUpload(factory);
10:         upload.setSizeMax(2097152);
11:
12:         try {
13:             List<FileItem> items = upload.parseRequest(request);
14:             for (FileItem item : items) {
15:                 if (item.isFormField()) {
16:                     params.put(item.getFieldName(), new String[]{new String(item.get())});
17:                 } else {
18:                     uploadEvents.fire(item);
19:                 }
20:             }
21:         } catch (FileUploadException e) {
22:             throw new RuntimeException(e);
23:         }
24:     }
25:
26:     @Override
27:     public String getParameter(String name) {
28:         String[] values = getParameterValues(name);
29:         if (values == null || values.length == 0) {
30:             return null;
31:         }
32:
33:         return values[0];
34:     }
35:
36:     @Override
37:     public Map<String, String[]> getParameterMap() {
38:         return params;
39:     }
40:
41:     @Override
42:     public Enumeration<String> getParameterNames() {
43:         return params.keys();
44:     }
45:
46:     @Override
47:     public String[] getParameterValues(String name) {
48:         return params.get(name);
49:     }
50:
51: }
Line 16 extracts the ordinary request parameters, and line 18 publishes the file item to the listening beans. A simple filter checks the request, and if it's a multipart request, wraps it with this wrapper.
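The filter itself isn't shown; its multipart check essentially boils down to inspecting the Content-Type header. A minimal sketch of that check (plain Java; in the real filter the header would come from HttpServletRequest.getContentType()):

```java
public class MultipartCheck {
    // a request is multipart when its Content-Type starts with "multipart/",
    // e.g. "multipart/form-data; boundary=----x"
    static boolean isMultipart(String contentType) {
        return contentType != null
                && contentType.toLowerCase().startsWith("multipart/");
    }

    public static void main(String[] args) {
        System.out.println(isMultipart("multipart/form-data; boundary=abc")); // true
        System.out.println(isMultipart("application/x-www-form-urlencoded")); // false
    }
}
```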