DoTaL

My repository for development tidbits, and a few personal anecdotes

Informatica sources and sorted ports

Posted by doubleA on January 21, 2009

Informatica supports sorting in the Source Qualifier in addition to the Sorter transformation. This came up when I was working on updates to an existing mapping that outputs data to a CSV file. The output file is sorted by date, and the sort is controlled in the SQ (not a Sorter transformation). Sorting in the SQ will generally be more performant because the sort is done at the database level.

It’s very simple to sort the source data since Informatica adds the sorted SQ ports to the generated SQL statement. The only caveat is that you have to put the sorted ports at the top of the list of ports in the SQ, as mentioned in the Informatica help.

    From the Informatica Help:

Source Qualifier->Properties->Number Of Sorted Ports
Indicates the number of columns used when sorting rows queried from relational sources. If you select this option, the Integration Service adds an ORDER BY to the default query when it reads source rows. The ORDER BY includes the number of ports specified, starting from the top of the transformation. When selected, the database sort order must match the session sort order.

If you select 4 as your number of sorted ports, Informatica will generate an ORDER BY clause that uses the first 4 ports in your SQ transformation.
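For example, with hypothetical ports ORDER_DATE, CUSTOMER_ID, ORDER_ID, and LINE_NUM at the top of the SQ and Number Of Sorted Ports set to 4, the generated query would look roughly like this (table and port names are made up for illustration):

```sql
-- Illustrative only: the SQ appends an ORDER BY over the first N ports
SELECT ORDER_DATE, CUSTOMER_ID, ORDER_ID, LINE_NUM, AMOUNT
FROM ORDERS
ORDER BY ORDER_DATE, CUSTOMER_ID, ORDER_ID, LINE_NUM
```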

Posted in Informatica | Leave a Comment »

Informatica emails on session completion

Posted by doubleA on December 11, 2008

Over the past year and a half, I have been learning Informatica in order to integrate data between several applications. Along the way, I have settled on a few configuration settings that I use on all of my workflows.

The first of these is email notification on all sessions. I also include a final workflow email in my more complicated workflows, but this post only deals with the session emails.

I use re-usable email tasks in all of my workflow sessions so that I receive a status email when the session is run. Some admins do not want to receive multiple emails, but I like to see the emails come through as the sessions are completed, and I also prefer to get the row count information to verify that the correct volumes of data are being processed.

Setup Steps
First, set up a re-usable email task for success emails and another re-usable email task for failure emails.

Success Email Task Configuration:
Email User Name – set to an admin group
Email Subject – SUCCESS – Workflow Session %s
Email Text –
Session – %s
Mapping – %m
Repo – %d

%b
%i
%c

%t

%l
%r

The email text displays the session name, mapping name, and repository name (dev, tqa, prod), and then information about the session run. This includes the start and end time of the session, source row counts, target row counts, and finally total rows loaded and total rows rejected.

I also have a failure email task setup with the only difference being the subject which starts with FAILURE instead of SUCCESS.

Second, in each session, go to the Components tab and set the On Success E-Mail and On Failure E-Mail values to use the appropriate re-usable email task.

Now, each session will send a status email upon completion.

Here is a sample of the email that is generated:

Session – s_m_MyApp_DataFeedABC
Mapping – m_MyApp_DataFeed [version 4]
Repo – REPO_PRD

Start Time: Wed Dec 10 16:41:33 2008
Elapsed time: 0:03:55 (h:m:s)
Completion Time: Wed Dec 10 16:45:28 2008

SOURCE
Rows        Rows        Data
Loaded      Rejected    Throughput   Table Name
                        (bytes/sec)
----------  ----------  -----------  ----------------
Status
----------------------------------------------------------------
1080        0           1080         SQ_Shortcut_to_CSV_Daily
No errors encountered.
----------------------------------------------------------------
----------------------------------------------------------------
TARGET
Rows        Rows        Data
Loaded      Rejected    Throughput   Table Name
                        (bytes/sec)
----------  ----------  -----------  ----------------
Status
----------------------------------------------------------------
158         0           11           MyTable_Insert
No errors encountered.
485         0           33           MyTable_Update
No errors encountered.
144         0           36           MyTable_LOADERROR_Insert
No errors encountered.
293         0           17           MyTable_LOADERROR_Update
No errors encountered.

Total Rows Loaded = 1080
Total Rows Rejected = 0

Now sit back and enjoy the barrage of emails 🙂

Posted in Informatica | Tagged: | 6 Comments »

A carfree week and Independence Day

Posted by doubleA on June 30, 2008

This week is going to be my first carfree week in several months. I have been slacking off on biking to the light rail station, but this week I am making myself remember to bike. And it’s Independence Day this week, so what better time to be independent of the car? I’m not really doing this to compensate for high gas prices, but rather to get my butt on the bike more often. I have not been getting a lot of rides in lately, and this is a quick and easy way to get to work and bike at the same time.

This week will be fairly easy because Friday is a holiday, and I’m working from home 2 days. But on the other 2 days I’m in the office, I’ll be on the bike at 6AM headed to the light rail station. Overall count for the week will be 44 miles of commute rides and hopefully a few other miles of fun rides on the days I work from home.

Posted in Bikes | 1 Comment »

Log4J 1.2.15 and maven dependencies

Posted by doubleA on May 6, 2008

Log4J 1.2.15 now depends on JMX and JMS artifacts that are not in the standard Maven repos.

This post expands on this post from One Man Went to Mow… to add the jms and mail dependencies.

The following is a list of compile dependencies for this project. These dependencies are required to compile and run the application:

The Log4J site lists the dependencies, but I have also included them for quick reference.

GroupId        ArtifactId   Version
com.sun.jmx    jmxri        1.2.1
com.sun.jdmk   jmxtools     1.2.1
javax.jms      jms          1.1
javax.mail     mail         1.4

The JMS download site is here
The JMX download site is here. Be sure to get the 1.2.1 reference implementation.

Once you have these dependencies in your local repo or shared repo, Maven should be able to build a project that depends on Log4J 1.2.15.
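For quick copy/paste, the coordinates from the table above translate into the following pom.xml dependency elements (declared alongside the log4j dependency itself):

```xml
<dependency>
  <groupId>com.sun.jmx</groupId>
  <artifactId>jmxri</artifactId>
  <version>1.2.1</version>
</dependency>
<dependency>
  <groupId>com.sun.jdmk</groupId>
  <artifactId>jmxtools</artifactId>
  <version>1.2.1</version>
</dependency>
<dependency>
  <groupId>javax.jms</groupId>
  <artifactId>jms</artifactId>
  <version>1.1</version>
</dependency>
<dependency>
  <groupId>javax.mail</groupId>
  <artifactId>mail</artifactId>
  <version>1.4</version>
</dependency>
```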

Posted in Java Development, Maven | 3 Comments »

Why Open Source Rocks!!

Posted by doubleA on April 23, 2008

This week I had one of those experiences that makes you love open source even more. Yes, open source is commonplace for Java developers, but sometimes we forget how awesome it can be.

This is not a post about some new whiz-bang utility, a new version of an Ajax toolkit, or anything spectacular. It’s about the little things in developer life that allow you to be productive.

The story:
This week I have been working on some web services integration with Informatica. Informatica has a published set of WSDL files for running workflows and getting status information. I started with a new project and began with the login service. Obviously, a user has to log in to the system before they can run workflows. When running the services, I continually received an IndexOutOfBounds error. After digging into the CXF documentation and mailing lists, I found a variety of options, but none of them worked. At this point I was about 6-8 hours into the testing and was getting frustrated.

I then decided to post on the users mailing list which is the start of the “Rocks” part of this post.

Within 20 minutes, I got a response from one of the lead developers on the toolkit. We had a couple of email exchanges over the next hour, and the end result was that he identified a bug. He was going to work on the bug the next day and give me an update.

The next day, I got an email just after lunch stating that the fix was implemented and deployed in the latest Maven snapshot.

I simply opened up my Maven pom.xml file and changed my versions to the new snapshot version. After downloading the new version, PRESTO, my code ran without errors and I was able to login to Informatica through the web services.

From the time of me posting on the mailing list to the deployed bug fix was less than 24 hours. Try that with Microsoft, IBM, Oracle, or anyone else with a proprietary product. They would take 24 hours to get to level 3 support and you would almost never email directly with the lead developers.

Posted in Java Development | 1 Comment »

Creating a new Maven archetype

Posted by doubleA on April 7, 2008

The latest version of the Maven archetype plugin allows you to create an archetype from an existing project. The alpha-3 version is required if you are creating an archetype from a multi-module project. This is important because you have to configure some additional repositories in order to download the alpha-3 version.

Below are the steps and issues I ran into:

Configuration
1. Edit settings.xml to add new repositories. Since archetype is a plugin and a core Maven feature, you have to set up both a repository and a plugin repository.

Repository Configuration – add this under the repositories tag:

<repository>
	<id>snapshots</id>
	<url>http://people.apache.org/repo/m2-snapshot-repository/</url>
	<!-- The releases element here is due to an issue in Maven 2.0 that will be
	fixed in future releases. This should be able to be disabled altogether. -->
	<releases>
		<updatePolicy>always</updatePolicy>
	</releases>
	<snapshots>
		<updatePolicy>always</updatePolicy>
	</snapshots>
</repository>

Plugin repository Configuration – add this under the pluginRepositories tag:

<pluginRepository>
	<id>snapshots</id>
	<url>http://people.apache.org/repo/m2-snapshot-repository/</url>
	<!-- The releases element here is due to an issue in Maven 2.0 that will be
	fixed in future releases. This should be able to be disabled altogether. -->
	<releases>
		<updatePolicy>always</updatePolicy>
	</releases>
	<snapshots>
		<updatePolicy>always</updatePolicy>
	</snapshots>
</pluginRepository>

See http://maven.apache.org/guides/development/guide-testing-development-plugins.html for more details on setting up snapshot repos.

2. Open a command prompt and go to the root project folder of the parent project that will be used to build the archetype. Run “mvn -U archetype:create-from-project”
3. The alpha-3 version will download
4. Switch to the target/generated-sources/archetype directory
5. Run “mvn install” to put the archetype in the local repo
6. Comment out the plugin repo so that updates are not always downloaded
7. From another directory, run “mvn archetype:generate -DarchetypeCatalog=local”

Issues
I did run into several issues when creating an archetype from one of my projects. There were no major issues, but one of them does need some further research.

1. If you have ant tasks defined in your pom.xml, the archetype will create successfully, but no new projects can be created from it. The error mentions that you can’t have certain characters in the CDATA of the pom. For me, I had to completely remove the ant tasks from my project. But, I’ve been thinking of doing this anyway because I use the ant tasks to control my runtime environment settings. A better approach for this would be to use profiles, so now I kind of have to look at profiles to work around this issue.

2. I also found an issue with ### symbols in log statements and println statements. The workaround was to replace any ### chars with — and then everything worked.

3. The project that is created from the archetype has submodules named “__rootArtifactId__-web” and “__rootArtifactId__-core” instead of “MyNewApp-core” and “MyNewApp-web”. It’s only the folder name that is wrong; the poms all reference MyNewApp-web as the module name. A folder rename solves this issue.
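As a sketch of the profiles approach mentioned in issue 1 above (the profile IDs and the server.name property here are hypothetical, not from my actual project), environment settings can live in per-environment profiles in the pom instead of ant tasks:

```xml
<profiles>
  <!-- dev is the default environment -->
  <profile>
    <id>dev</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <server.name>dev</server.name>
    </properties>
  </profile>
  <!-- activate with: mvn -Pprod ... -->
  <profile>
    <id>prod</id>
    <properties>
      <server.name>prod</server.name>
    </properties>
  </profile>
</profiles>
```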

Posted in Maven, Uncategorized | 3 Comments »

Where are the Wicket jobs?

Posted by doubleA on March 27, 2008

I’m currently in the middle of a job search, and I’m amazed that I don’t see a single local job posting referencing Wicket. There are a handful of jobs posted on Monster that reference Wicket, but these are either in CA or on the East Coast. Other job boards return 0 results for Wicket. Does anyone know of any companies in the Denver area that use Wicket? Or do you know of any companies that use Wicket and support a telecommuting developer?

If you are using Wicket yourself, how did you get the framework integrated into the company? Did you start with a proof-of-concept application to compare it to existing standards? Or did you take my approach, where there were no hard standards within a fairly new J2EE organization?

In my case, I was lucky in that I was working on a new application and I didn’t have any restrictions on web frameworks. I had used Struts for several years, but got tired of the JSP hell with scriptlets all over and business/display logic embedded in them. That’s when I found Wicket, and I have been happy ever since. The lack of JSP and scriptlets was my main draw to Wicket, but the components and flexibility of the framework are equally impressive.

I do want to continue my Wicket love affair, but where does one go to get their Wicket fix?

Posted in Wicket | 1 Comment »

Calling stored procedures from a generic DAO

Posted by doubleA on March 24, 2008

Wow, I finally got off my ass and finished this posting. Sorry for the long delay. I could probably blame it on the holidays, a busy work schedule, or the lack of sunlight during winter. But the cold reality is I was being lazy and procrastinating. It’s hard to believe I procrastinated for 4 months, but I did. Anyway, I finally have this in a somewhat publishable format, so here goes:

With the addition of generics in Java 5, writing a custom DAO for each domain object is no longer required. There are a wide variety of articles on creating generic DAOs, but my current project uses the approach from this IBM developerWorks article. This approach was chosen mainly because of the clearly written article and the integration with Spring. You should be able to extend any Spring-based generic DAO to implement the stored procedure configuration.

Note: If it ain’t broke, don’t fix it. The generic DAO implementation leverages the Spring 1.2.x configuration because the code was created prior to Spring 2.0. I’m sure the configuration could be upgraded to Spring 2.0, but I didn’t see the need for it since the 1.2.x configuration worked under Spring 1.2.x and Spring 2.0.

I have taken this approach and modified it to perform two additional functions. The first is the ability to execute stored procedures. The second is to execute those procedures against various databases such as Oracle and HSQLDB. See this previous post on how to set up stored procs for HSQLDB. That post boils down to using HSQLDB for unit testing on my dev machine, and Oracle for functional and integration testing in all other environments (dev server, test server, prod server).

The details below get quite long with all of the code snippets, but hopefully they are not too confusing.

Before getting into the details of the stored proc extension of the generic DAO, let’s first review the generic DAO code and configuration. The IBM article explains in detail how Spring and Hibernate are configured, but this is a simple synopsis of the Spring configuration. Spring AOP is utilized to intercept method calls to determine if a “finder” method is called instead of a standard CRUD method. This configuration allows the custom application DAO interfaces to define just the finder methods and no CRUD methods. The generic DAO finder implementation uses a Hibernate named query in which the named query matches the finder method name.

Initial generic DAO config

<!--  DAO Layer generic config-->
    <bean id="extendedFinderNamingStrategy" class="com.myapp.dao.finder.ExtendedFinderNamingStrategy" />
    <bean id="finderIntroductionAdvisor" class="com.myapp.dao.finder.FinderIntroductionAdvisor"/>

    <bean id="abstractDAOTarget" class="com.myapp.dao.DAOHibernateImpl" abstract="true">
        <property name="sessionFactory">
            <ref bean="sessionFactory"/>
        </property>
        <property name="namingStrategy">
            <ref bean="extendedFinderNamingStrategy" />
        </property>
    </bean>

    <bean id="abstractDAO" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
        <property name="interceptorNames">
            <list>
                <value>finderIntroductionAdvisor</value>
            </list>
        </property>
    </bean>
<!-- End DAO Layer generic config-->

Extension #1 – Support for stored procedure calls in the generic DAO

The stored procedure extension adds another AOP advisor to intercept method calls that are for stored procedures. With the SP extension, the custom application DAO interfaces can add method definitions for stored procedures. When one of those method calls is intercepted, the generic DAOHibernateImpl class will dynamically generate a JDBC statement to call the stored procedure. The name of the stored procedure DAO method must be the stored procedure name prefixed with either SP_NRS_ or SP_RS_, depending on whether the stored procedure returns a result set.
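The naming convention can be seen in miniature with a small, self-contained sketch. The class and constant below are my own for illustration (the real code keeps the prefix in DAOConstants.PREFIX_SP_NRS), but the call-string building mirrors what DAOHibernateImpl does:

```java
// Sketch: how a DAO method name like SP_NRS_load_ReportItem maps to a JDBC call string.
public class SpNamingDemo {
    // hypothetical stand-in for DAOConstants.PREFIX_SP_NRS
    static final String PREFIX_SP_NRS = "SP_NRS_";

    // Strip the prefix to get the proc name, then add one '?' per argument.
    static String buildCall(String methodName, int paramCount) {
        String spName = methodName.substring(PREFIX_SP_NRS.length());
        StringBuilder sb = new StringBuilder("{ call " + spName + "(");
        for (int i = 0; i < paramCount; i++) {
            sb.append("?");
            if (i < paramCount - 1) {
                sb.append(", ");
            }
        }
        return sb.append(") }").toString();
    }

    public static void main(String[] args) {
        // builds "{ call load_ReportItem(?, ?) }"
        System.out.println(buildCall("SP_NRS_load_ReportItem", 2));
    }
}
```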

Final Generic DAO Spring config

<!--  DAO Layer generic config-->
    <bean id="extendedFinderNamingStrategy" class="com.myapp.dao.finder.ExtendedFinderNamingStrategy" />
    <bean id="finderIntroductionAdvisor" class="com.myapp.dao.finder.FinderIntroductionAdvisor"/>
    <bean id="storedProcIntroductionAdvisor" class="com.myapp.dao.finder.StoredProcIntroductionAdvisor"/>

    <bean id="storedProcParamConfig" class="com.myapp.dao.sp.OracleStoredProcParamConfig"/>

    <bean id="abstractDAOTarget" class="com.myapp.dao.DAOHibernateImpl" abstract="true">
        <property name="sessionFactory">
            <ref bean="sessionFactory"/>
        </property>
        <property name="namingStrategy">
            <ref bean="extendedFinderNamingStrategy" />
        </property>
        <property name="spParamConfig">
            <ref bean="storedProcParamConfig" />
        </property>
    </bean>

    <bean id="abstractDAO" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
        <property name="interceptorNames">
            <list>
                <value>finderIntroductionAdvisor</value>
                <value>storedProcIntroductionAdvisor</value>
            </list>
        </property>
    </bean>
<!-- End DAO Layer generic config-->

Spring Advisor for Stored Procs:

public class StoredProcIntroductionAdvisor extends DefaultIntroductionAdvisor {
    public StoredProcIntroductionAdvisor() {
        super(new StoredProcIntroductionInterceptor());
    }
}

Spring Interceptor:

/**
 * Connects the Spring AOP magic with the Hibernate DAO magic.
 * For any method beginning with "SP_", this interceptor will use the StoredProcExecutor to call a stored procedure.
 */
public class StoredProcIntroductionInterceptor implements IntroductionInterceptor {
    public Object invoke(MethodInvocation methodInvocation) throws Throwable {
        StoredProcExecutor executor = (StoredProcExecutor) methodInvocation.getThis();

        String methodName = methodInvocation.getMethod().getName();
        if(methodName.startsWith( "SP_NRS_" )) {
            Object[] arguments = methodInvocation.getArguments();
            return executor.executeNoReturnSetProc(methodInvocation.getMethod(), arguments);
        /*
        } else if (methodName.startsWith( "SP_RS_" )) {
        	Object[] arguments = methodInvocation.getArguments();
        	return executor.executeReturnSetProc(methodInvocation.getMethod(), arguments);
        */
        } else {
            return methodInvocation.proceed();
        }
    }

    public boolean implementsInterface(Class intf) {
        return intf.isInterface() && FinderExecutor.class.isAssignableFrom(intf);
    }
}

Stored Proc Executor Interface:

public interface StoredProcExecutor {
    /**
     * Execute a storedProc that returns a result set with the appropriate arguments
     */
    //public List executeReturnSetProc(Method method, Object[] queryArgs);

    /**
     * Execute a storedProc that DOES NOT return a result set with the appropriate arguments
     */
    public Boolean executeNoReturnSetProc(Method method, Object[] queryArgs);
}

Extension #2 – Support for any database regardless of stored procedure support for OUT parameters

Because Oracle supports OUT parameters, but HSQLDB does not, I added a configuration object to determine if the stored procedure call will return OUT parameters. The configuration object is used by the DAOHibernateImpl class to control how the stored procedure is called.

SP Config interface:

public interface StoredProcParamConfig {
     public boolean allowOutParameters();
}

Oracle implementation which supports out parameters:

public class OracleStoredProcParamConfig implements StoredProcParamConfig {
     public boolean allowOutParameters() {
          return true;
     }
}

HSQLDB implementation that does not support out parameters:

public class HSQLStoredProcParamConfig implements StoredProcParamConfig {
     public boolean allowOutParameters() {
          return false;
     }
}

Generic DAO code implementation of the extension

Finally, we get to the meat of the code that makes the stored procedure call. The stored procedure support is simply a few additional methods added to the DAOHibernateImpl class, plus changing DAOHibernateImpl to implement StoredProcExecutor.

// first, setup the StoredProcParamConfig var that can be injected.
private StoredProcParamConfig spParamConfig;	

public StoredProcParamConfig getSpParamConfig() {
	return spParamConfig;
}

public void setSpParamConfig(StoredProcParamConfig spParamConfig) {
	this.spParamConfig = spParamConfig;
}

// StoredProcExecutor methods
    /**
     * Execute a storedProc that returns a result set.  If the current
     * spParamConfig object does not support return values, null is
     * returned.
     */
    @SuppressWarnings("unchecked")
    public List executeReturnSetProc(Method method, Object[] queryArgs) {
        if (getSpParamConfig().allowOutParameters()) {
            final Query namedQuery = prepareQuery(method, queryArgs);
            return (List) namedQuery.list();
        } else {
            return null;
        }
    }

    /**
     * Execute a storedProc that DOES NOT return a result set.
     */
    // TODO could probably use a namedQuery to call the proc and not have to do all of the special code
    public Boolean executeNoReturnSetProc(Method method, Object[] queryArgs) {
        Connection conn = null;
        CallableStatement proc = null;
        StringBuffer spCall = new StringBuffer();
        try {
            // The connection is not closed at the end of the method because the method
            // relies on the Hibernate session being closed by Spring or the OpenSessionInView filter.
            conn = getHibernateTemplate().getSessionFactory().getCurrentSession().connection();
            String spName = method.getName().substring(DAOConstants.PREFIX_SP_NRS.length());

            // build the call statement with the correct number of params and the sp name
            spCall.append("{ call " + spName + "(");
            for (int i = 0; i < queryArgs.length; i++) {
                spCall.append("?");
                // add a comma if there are more params;
                // subtract 1 since arrays index from 0
                if (i < (queryArgs.length - 1)) {
                    spCall.append(", ");
                }
            }
            spCall.append(") }");
            LogManager.debug(this.getClass(), "executeNoReturnSetProc - spCall=" + spCall);

            proc = conn.prepareCall(spCall.toString());

            // add the params to the call
            for (int i = 0; i < queryArgs.length; i++) {
                setParam(proc, i + 1, queryArgs[i]);
            }
            proc.execute();
            return Boolean.TRUE;
        } catch (SQLException e) {
            LogManager.debug(this.getClass(), "executeNoReturnSetProc ERROR = " + e.getMessage()
                    + " with spCall=" + spCall);
            e.printStackTrace();
            return Boolean.FALSE;
        } finally {
            try {
                if (proc != null) {
                    proc.close();
                }
            } catch (SQLException e) {
                // ignore close failures
            }
        }
    }

    private void setParam(CallableStatement cs, int pos, Object o) throws SQLException {
        if (o == null) {
            cs.setNull(pos, java.sql.Types.NULL);
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.NULL);
            }
        } else if (o instanceof String) {
            cs.setString(pos, (String) o);
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.VARCHAR);
            }
        } else if (o instanceof Short) {
            cs.setShort(pos, ((Short) o).shortValue());
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.SMALLINT);
            }
        } else if (o instanceof Integer) {
            cs.setInt(pos, ((Integer) o).intValue());
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.INTEGER);
            }
        } else if (o instanceof Double) {
            cs.setDouble(pos, ((Double) o).doubleValue());
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.DECIMAL);
            }
        } else if (o instanceof Float) {
            cs.setFloat(pos, ((Float) o).floatValue());
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.FLOAT);
            }
        } else if (o instanceof Long) {
            cs.setLong(pos, ((Long) o).longValue());
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.BIGINT);
            }
        } else if (o instanceof java.sql.Date) {
            cs.setDate(pos, (java.sql.Date) o);
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.DATE);
            }
        } else if (o instanceof java.util.Date) {
            cs.setDate(pos, new java.sql.Date(((java.util.Date) o).getTime()));
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.DATE);
            }
        } else if (o instanceof byte[]) {
            cs.setBytes(pos, (byte[]) o);
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.VARBINARY);
            }
        } else {
            cs.setObject(pos, o);
            if (spParamConfig.allowOutParameters()) {
                cs.registerOutParameter(pos, java.sql.Types.JAVA_OBJECT);
            }
        }
    }
    // END StoredProcExecutor methods

You’ll notice that the method that returns a result set is much shorter and cleaner than the NoReturnSet method. There are two reasons for this. The first is that I currently don’t have any procedures in my application that return a result set. In the initial design there were, but those were removed once we got into the implementation of the stored procedures. That leaves a bit of code that I no longer use and have not fully integration tested. The calls do work in HSQLDB because I have unit tests to make sure the method is called, but I have not done any extensive testing and verification with an Oracle procedure that returns data.

The NoReturnSet method is quite long because I am using direct JDBC code rather than calling a Hibernate namedQuery. There is no technical reason why a namedQuery could not be used, but I have not had a chance to work on this change. This code was implemented within some tight time constraints, and I adhered to my mantra of “if it ain’t broke and you ain’t got time, don’t fix it”. A couple hours of coding and testing could probably reduce this mess of code to fewer than 10 statements.

Integrating into a build system

In my build system, I have extracted the DAO Spring configuration into a separate file in order to configure the DAO layer based on the environment. For dev, I use the HSQLDB SPConfig object, but for all others I use the Oracle SPConfig object. I use Maven for my builds, and I have a file copy step that selects the proper DAO configuration based on the -Dserver.name=XXX value supplied when Maven is run. If you are a Maven user, profiles could also be used to determine which SPConfig object is used. Or if you only use one database in all environments, you can hardcode that SPConfig object into your Spring configuration.

Posted in Java Development | 3 Comments »

Finally some Colorado fall weather

Posted by doubleA on November 21, 2007

We finally have some real fall-like weather today. Highs in the mid 20s with 3-5 inches of snow. It’s been unseasonably warm this fall, with mid 70s earlier this week.

It will be a white Thanksgiving!!

Yahoo Weather conditions as of 7AM this morning.
Light Snow
Currently: 22°
Feels Like: 14°
Barometer: 30.22 in and rising
Humidity: 90%
Visibility: 1.75 mi
Dewpoint: 20°
Wind: NNE 6 mph
Sunrise: 6:49 am
Sunset: 4:41 pm
High: 26° Low: 10°

Current Yahoo Weather

Posted in Uncategorized | 1 Comment »

Stored procedures in HSQLDB

Posted by doubleA on October 30, 2007

Today, I had the fun of configuring HSQLDB to run stored procs. The entire process was not that bad, but there didn’t seem to be a simple tutorial for how to accomplish this task. Here is my attempt at that tutorial.

A Little Background:
My current app uses a stack of Wicket, Spring, and Hibernate with HSQLDB as the database for development and Oracle for test/prod deployments. Up to this point, we have used Hibernate to generate all of the SQL for accessing and modifying data.

One of our features is the ability to generate a report which calculates YTD totals and averages. The generation processes several hundred thousand records, so we chose not to implement it in Java/Hibernate. Instead, we used stored procedures in Oracle. The generation of the report is initiated from a page within Wicket, which means that my web interface must call into Spring and then into Hibernate to call the stored procedure.

For this scenario, I didn’t need to write full stored procedures in HSQLDB since HSQLDB is only used for my dev and unit testing. However, I did need HSQLDB to have stub procs to call so that I could pass all of my unit tests.

The Solution:
First, HSQLDB has to be configured to load the stored procs. Since I use Hibernate, I have an import.sql file that preps my database with a bunch of inserts.

From the hibernate website:
Also note that since Hibernate 3.1 you can include a file called “import.sql” in the runtime classpath of Hibernate. At the time of schema export it will execute the SQL statements contained in that file after the schema has been exported.

To the import.sql file, I can add my stored proc definitions. Since HSQLDB is written in Java, it uses Java methods for its stored procs. Any static Java method can be configured as a stored procedure, such as Math.sqrt, as shown in the HSQLDB FAQ.

Given this info, I created a new class named HSQLDummyStoredProcs. Inside of this class I have the following methods defined:

public class HSQLDummyStoredProcs {
	public static void proc_Int(Integer i1) {
	}

	public static void proc_IntInt(Integer i1, Integer i2) {
	}

	public static void proc_IntDate(Integer i1, Date d1) {
	}

	public static void proc_LongDate(Long l1, Date d1) {
	}
}

This class simply defines empty static methods with various combinations of parameter types. Inside my import.sql file, I can alias these methods to a stored procedure name, such as:


CREATE ALIAS load_ReportItem FOR "com.mycompany.dao.sp.HSQLDummyStoredProcs.proc_IntDate";

This aliases the load_ReportItem stored procedure to the proc_IntDate method in my dummy proc class.

Now, anytime my code calls the load_ReportItem stored procedure, HSQLDB will execute my proc_IntDate method with the given parameters. I don’t have to worry about the functionality of the stored procs, since I’m only trying to verify that the stored procedure is called during my unit testing. I can use the same Spring/Hibernate code for both HSQLDB and Oracle. The testing and verification of the stored procedures are done by the Oracle developer.

Note:
If you are using HSQLDB as your primary database, you probably don’t have to go through the ALIAS step, since you could call the stored procedure by the fully qualified Java method name. However, I still think the ALIAS is a good idea because it gives you one level of indirection between the DB names and the Java names.

Note 2:
HSQLDB does not support OUT parameters in stored procedures. I was able to work around this issue by simply not setting any OUT parameters when HSQLDB was my database. The details of this workaround will be in the next post, but the basic idea is to let Spring inject an HSQLDB class that doesn’t do OUT parameters when I’m in dev, and inject an Oracle class that does set OUT parameters for other environments.

In the next post, I’ll detail how I plugged the stored procedure code into my generic DAO framework and issues I discovered with having generic code work for HSQLDB and Oracle.

Posted in Java Development | 8 Comments »