9 days to go to EMCW: why is case management so cool?

April 26th, 2015 1 comment

I was just trying to finish my last tasks before EMC World. One important task is finishing a mock-up for a client: a mock-up for their new xCP 2.1 application. While doing this, I wonder more and more about this new concept of case management. Is it just another buzzword, or does it really introduce a new paradigm in ECM?

The basic functional difference is the ability to create an object with relations, transactions (sorry, I have to say stateless processes, a very good name…?) and structured datasets (contentless objects, another perfect name to explain to an end user…?). All nice technical functions that I really like, but do they make that much difference to the end user? Is this the ECM goose that lays the golden eggs, or just some nice extra modern features?

Where it gets interesting is when you look at the challenges of a case. Simply put, a case is a large amount of related information about a set of tasks that have to be performed or a goal that has to be reached. The challenge for every user within a case management solution is the overview: how do I see at one glance what the case is about and what I am supposed to do with it? With all those great technical options (relations, transactions, etc.) it is so difficult to see the tree within the forest (a nice Dutch expression, but I think everybody will understand this one instantly).

So why is case management a game changer, and what is the basic necessity for a good case management system? In the end it is easy (maybe not for the average Documentum consultant), but the answer is: USER EXPERIENCE. Simple, but so true. To give an end user a workable system, it needs such a good user design that within seconds a case worker knows what it is all about.

A normal Documentum consultant who was used to working with WebTop did not really know, or want to know, what user design was, for the simple reason that designing any good user experience in WebTop was a challenge, or should I say impossible.

But now we have xCP. This gives the designer a really flexible tool to fully design the interface and give people the right display of information, so they can work very efficiently and like what they are doing. The interaction (or should I use another buzzword, the agile approach) you can have with the user group before you start to create a technical solution, when you simply create a mock-up, is baffling. Users cannot wait to get the system, workshops are actually fun to do, and the results using tools like Axure are super. So far my thoughts about the new xCP; tomorrow some more detail about what makes a good design (as far as this simple Documentum consultant understands it).

EMC Elect is getting ready for EMCW-Momentum 2015

April 25th, 2015 Comments off

It is less than 10 days until I fly to Vegas again, ready for a week of knowledge sharing and showing the world the wonders of SPA4D and LoBConnect. It will be different from previous years, for a number of reasons:

1) We have two very cool products. We already see how happily our customers are reacting to them, but we are excited to see how the Momentum crowd will respond.

2) I cannot wait to finally catch a glimpse of all the new things from ECD: What will the new Capital Projects UI look like? Is the 3rd gen platform getting the traction it deserves? Is LSQM ready for the next step, and will the multi-tenant solution for Life Sciences be a good competitor for Veeva? What about xCP and a rules engine? D2 and xCP are growing more and more towards each other, but when will they become one: now, never, or anything in between?

3) Being EMC Elect. I don't know if anything will be different, but I feel obliged to write a blog post per day to share my ideas and thoughts.

Please drop by our booth and share your thoughts. I'm always up for a good discussion.

What you tell me in Vegas might end up in my blog 🙂

Documentum Dump and Load limitations

March 11th, 2015 Comments off

Lately I've been involved in a project where we used Documentum's dump/load feature to copy a lot of documents from one repository to another. We successfully copied millions of documents, folders and other objects, but this success did not come easily. In this blog post I would like to share some of the issues we had, for the benefit of others using dump and load.

A standard tool

Dump and load is a tool that extracts a set of objects from a Documentum repository into a dump file and loads them into a different repository. Dump and load is part of the Documentum Content Server, which means it can be used with any Documentum repository in the world. The tool is documented in the Documentum Content Server Administration and Configuration Guide (find it here on the EMC Support site). The admin guide describes the basic operation of dump and load, but does not discuss its limitations. There is also a good Blue Fish article about dump and load that provides a bit more background.
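To give an idea of the mechanics, here is a minimal sketch based on the admin guide (the file path and predicate are placeholders, and the lines starting with # are annotations for readability rather than iapi input). A dump is started by creating and saving a dm_dump_record object in the source repository; a load is started by creating and saving a dm_load_record in the target repository:

```
# iapi against the SOURCE repository: create and save a dump record
create,c,dm_dump_record
set,c,l,file_name
/data/dumps/project_docs.dmp
append,c,l,type
dm_document
append,c,l,predicate
folder('/Projects/Alpha', descend)
save,c,l

# iapi against the TARGET repository: create and save a load record
create,c,dm_load_record
set,c,l,file_name
/data/dumps/project_docs.dmp
save,c,l
```

Note that saving the record is what actually kicks off the operation, so the save call can run for a long time, and the dump file path must be reachable by the Content Server itself.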

A fragile tool

Dump and load only works under certain circumstances. Most importantly, the repository must be 100% consistent, or a dump will most likely fail. So my first tip: always run the dm_clean, dm_consistencychecker and dm_stateofdocbase jobs before dumping, and fix any inconsistencies they find.
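As a quick sanity check before you start a dump, you can verify in idql when those jobs last ran and whether they completed cleanly. A sketch (the job object names below are the defaults in a standard repository; verify them in your own environment):

```sql
SELECT object_name, a_last_invocation, a_last_completion, a_current_status
FROM dm_job
WHERE object_name IN ('dm_DMClean', 'dm_ConsistencyChecker', 'dm_StateOfDocbase')
go
```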

Dump Limitations

The dump tool has limitations. It can be instructed to dump a set of objects using a DQL query: the tool runs the query and dumps all selected objects. It will also dump all objects that the selected objects reference. That includes the objects' ACLs, folders, users, groups, formats, object types, etc. This is done in an effort to guarantee that the configuration in the target repository will be ready for the objects to land in. This feature causes a lot of trouble, especially when the target repository has already been configured with all the needed object types, formats, and so on. It causes a 100-object dump to grow into a dump of thousands of objects, slowing down the dump and load process. Worse, the dump tool will dump any objects that are referenced from the original objects by object ID. This causes the folder structure for the selected documents to be included, as well as the content objects, but it can also cause other documents to be included, along with everything that those documents reference (it is a recursive process). This can backfire: if you select audit trail objects, for instance, all objects that they reference will be included in the dump.
Now this would not have been so bad if the dump tool did not have size limitations, but it does. We found, for instance, that it is impossible to dump a folder that holds more than 20,000 objects (though your mileage may vary); the dump tool just fails at some point in the process. We discussed it with EMC Support, and their response was that the tool has limitations that you need to live with.
As another example, we came across a repository where a certain group had many supergroups: it was a member of more than 10,000 other groups. This was also too much for the dump tool. Since this group was granted permissions in most ACLs, it became impossible to do any dumps in that repository. In the end we created a preparation script that removed this group from the other groups, and a post-dump script to restore the group memberships.
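A sketch of what such a preparation script boils down to (the group names are made up for illustration; obviously test this on a copy first): list the supergroups with a DQL query, drop the membership before the dump, and restore it afterwards.

```sql
-- Find all groups that have the problematic group as a direct member
SELECT group_name FROM dm_group WHERE ANY groups_names = 'all_employees'
go

-- For each supergroup found: drop the membership before the dump...
ALTER GROUP finance_users DROP all_employees
go

-- ...and restore it once the dump has completed
ALTER GROUP finance_users ADD all_employees
go
```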

Load Limitations

The load tool has its own limitations. Most importantly, we found that the bigger the dump file, the slower the load: a dump file with 200,000 objects will not load in twice the time it takes to load 100,000 objects, it will take longer than that. In our client's environment we really needed to keep the total object count of a dump well below 1 million, or the load time would go from hours to days. We learned this the hard way when a load failed after 30 hours and we had to revert it and retry.
Secondly, objects may be included in multiple dump files, for instance when there are inter-document relations. For objects like folders and types this is fine: the load tool will see that the object already exists and skip it. Unfortunately this works differently for documents. If a document is present in three dump files, the target folder will hold three identical documents after they have been loaded. Since you have no control over what is included in a dump file, and you cannot load partial dump files, there is little you can do to prevent these duplications. We had to create de-duplication scripts to resolve this for our client. We also found that duplicates can make the target docbase hold more documents than the source, so the file storage location or database can run out of space. For our production migration we therefore temporarily increased the storage space to prevent problems.
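As an illustration of the first step of de-duplication (a simplified sketch; the folder path is a placeholder, and real de-duplication will need to compare more than the name, e.g. version labels and content size), a DQL query can at least surface the candidates:

```sql
-- List document names that occur more than once below the migrated folder tree
SELECT object_name, COUNT(*)
FROM dm_document
WHERE FOLDER('/Projects/Alpha', DESCEND)
GROUP BY object_name
HAVING COUNT(*) > 1
go
```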
Another limitation concerns restarting loads. When a load stops halfway through, it can be restarted. However, in our project we never saw a load finish successfully after a restart. It is better to revert a partial load and start all over; reverting is much quicker than loading.
Finally, we found that after loading, some metadata of the objects in the target repository was not as expected. For instance, some attributes containing object IDs still held IDs from the source repository, and some held NULL IDs where there should have been a value. Again we wrote scripts to deal with this.
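A concrete example of such a check (a sketch; dm_relation is just one of the places where dangling IDs can show up): relations whose parent was not resolved during the load end up with a NULL object ID, which you can find with a query like this and then repair or remove with a script.

```sql
-- Relations that point to a parent that was not resolved during the load
SELECT r_object_id, relation_name, child_id
FROM dm_relation
WHERE parent_id = '0000000000000000'
go
```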

As a final piece of advice, I would encourage you to run all the regular consistency and cleaning jobs after finishing the loading process, including dm_consistencychecker, dm_clean, dm_filescan, dm_logpurge, etc. This cleans up anything left behind by the deletion of duplicate documents and ensures that the docbase is in a healthy state before it goes back into regular use.

As you may guess from this post, we had an exciting time in this project. There was a tight deadline and we had to work long hours, but the migration was a success and I am proud of everyone involved.

If you want to know more, or want to share your own experience with dump and load, feel free to leave a comment, send me an email (info@informedconsulting.nl) or tweet (@SanderHendriks).


Social Welfare at the ECM Workplace

January 19th, 2015 Comments off

A few months ago, Linda gave birth to her son Luca. Linda is the wife of Stephan, a colleague of mine. Curious as he is, Luca decided prematurely that it was time to see the light of day. That by itself wasn't a problem at all; the world was ready for him.

The birth of Luca triggered me to share a story that I tell my customers in the early days of a document management project. By now you are wondering why the birth of Luca triggered this story.

Here in the Netherlands, we have a social welfare system in place that kicks in during the early days of a pregnancy. Not only is the health of both mother and child monitored, but the system also ensures a safe home is in place for the newborn. It may sound overanxious, but one of the checks they do is to see whether you have a cradle for the baby. That same social welfare system functions as a lifeline throughout your entire life, until you come to your grave at a ripe old age.

That lifeline provides the guidance, the procedures, the policies and the actions to fall back upon during your life. It's the blueprint of the minimal life. You can still live your life to the max the way you want, as long as you don't underperform and drop below the minimum that the lifeline provides. It also takes into account the stages that you pass through in your life. You may become a parent yourself, which gives you access to child support. You may develop a partial disability to work, which gives access to special compensation benefits. And even a basic pension is provided when you reach the age of 65.

For us humans, the Social Welfare system provides the lower limit blueprint of our life from Cradle to Grave.

If you’ve read my previous post (Diversity at the ECM Workplace) about Connecting People to the Enterprise, you will understand that bringing and keeping your users on board requires an ECM solution that is easy to use but still honours the enterprise needs. One aspect that you need to facilitate is what I call the Social Welfare for the ECM Workplace.

Cradle to Grave is the concept that implements core information management functions, which become a lifeline throughout the entire life of your documents.

If I create a new document, the system needs to be ready for that: it needs to support the cradle. This can be done if the lifeline supports me with, for example, content types, templates, managed metadata and rule-based storage. In these early days of the document's life, the lifeline needs to understand whether it is going to be, say, a contract based on the English-language template. We stick more labels on the document to classify it, and together these allow a document management solution to decide where the cradle should be located.

That lifeline also provides the guidance, the procedures, the policies and the actions to fall back upon during the life of the document. It will pass through stages depending on the life it lives. In the infant stage you'll see typical states like Draft and For Review. In the adolescent stage the document will go up for approval and get approved. While the document matures, it can use the supporting processes to move between these states and stages. At some point it might become a reference document for others, which alters its access permissions as well as its information classification. Some documents will move from classified to unclassified, from internal use only to publicly available.
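In Documentum terms, by the way, this lifeline is a lifecycle (dm_policy) attached to the document. As a small sketch using standard dm_sysobject attributes, you can see where every document is on its lifeline:

```sql
-- r_policy_id identifies the attached lifecycle; r_current_state is the
-- index of the state (e.g. Draft, For Review, Approved) the document is in
SELECT object_name, r_policy_id, r_current_state
FROM dm_document
WHERE r_policy_id != '0000000000000000'
go
```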

Like all of us, the document will at some point retire. It will be withdrawn from active service but still be available in some archived format, with once again changed access permissions and information classification. It may also move to a new storage location.

For managed information, laws, rules and regulations determine the length of the pension. There is no fixed rule for this, just as nobody knows how many years one is given to enjoy old age. The harsh reality is that it won't last forever. For managed information, the grave means that the information is deleted from the ECM solution, or moved out of the system to preserve its historical value elsewhere.

Depending on your requirements and circumstances, you determine what that lower limit is and which ‘social benefits’ you provide your users.
For managed information, Social Welfare for the ECM Workplace provides the lower limit blueprint of the life of that information from Cradle to Grave.

So, why did the birth of Luca trigger this? Because of the parallel between the Dutch social welfare system and Cradle to Grave. You don't want a fixed path for your newborn, and nor should there be a one-size-fits-all approach for your documents if you want to keep your users connected with your enterprise needs. But the opposite is also true: you don't want uncontrolled chaos in either situation. It should be predictable, acknowledging that new documents get created and deleted and need to be managed in between. From Cradle to Grave.

Just as the concepts of Diversity and Cradle to Grave match perfectly in real life, they match perfectly in our ECM world. Take a look at SPA4D.com if you want to learn more about how we can help connect SharePoint collaboration functionality to the enterprise control of Documentum, or watch our blog for more articles on enterprise information management.

Diversity at the ECM Workplace

November 10th, 2014 Comments off

Just the other day I was driving home from the office, reflecting on events of the last few days and weeks. As always, driving home is one of those precious moments where I can sit back and reflect. Sitting in the car in traffic, it finally dawned on me.

For a couple of days I had been trying to put my finger on something that bothered me. I had been working on multiple engagements over the last few weeks: some related only to EMC Documentum, some only to Microsoft SharePoint, and some included both. All were in different industries. If you didn't know better, you would say there was nothing they had in common. But there was.
Read more…

Configuration vs Customization vs Development

June 8th, 2014 Comments off

Every day more customers become hesitant about development and question whether custom work against SharePoint is a good idea. Often they lean towards the opinion that only out-of-the-box solutions should be created. They believe this will pose fewer problems when upgrading to the next version, or they simply have an overall no-code policy.
This raises the question of what exactly out-of-the-box is. There have been jokes that a SharePoint installation is only out-of-the-box as long as you add content through the browser without changing anything else. There is a reason for those jokes: out-of-the-box does not always mean there will be no problems during an upgrade, and code does not always mean there will be. That is why we have been thinking about what customization, configuration and development are, and why we have created a vision of which should be used when. Read more…


Changes to the ‘SuiteBarBrandingElementHtml’ property not reflected

June 3rd, 2014 Comments off

In SharePoint 2013 you can run into this issue: the top left corner of pages does not reflect changes to the 'SuiteBarBrandingElementHtml' Web Application property, or is empty instead of showing the default 'SharePoint' text.

In this article I’ll explain why this can be the case and how you can resolve or prevent it.
Read more…

EMC World-Momentum 2014 – The final view

May 15th, 2014 Comments off

Sitting in a plane heading back home after four long days of bathing in Documentum information, discussion and fun, I'm ready to form a final view of the vision of the IIG team, the challenges and opportunities I see, and whatever more was there.

First off, I have to thank the EMC and IIG marketing teams for organizing a very good venue. Overall everything went more than smoothly, and only getting in and out of ballroom A with 12,000 people demanded a little patience. The party at TAO and Imagine Dragons are not easily forgotten.

The fact that (almost) all sessions for us Documentum geeks were on one level, with the perfectly located Momentum lounge in the middle and the separate area in the solution pavilion, made it almost seem as if the Core freaks were not there :-).

Now about the content. Read more…

Writing Bootstrap applications for Documentum 7

May 14th, 2014 2 comments

We all know that the modern user wants to use our applications on any device, from anywhere in the world. This used to be a real challenge for applications using EMC Documentum. The out-of-the-box applications such as WebTop, Administrator and TaskSpace were all designed to be used on a PC or laptop with a big screen. If you try to use them on your smartphone, usability goes down the drain. And I'm not even mentioning that they use a browser plugin for file transfers.

With the advent of Documentum 7 and xCP 2, things are looking better. The new xCP application UI works in all major browsers without using plugins. It also scales better to fit on smaller size screens.
However, there is still a way to go before the xCP UI can be called responsive.
Read more…

EMC-Momentum 2014- the first day

May 6th, 2014 Comments off

The first day at EMC World is always a challenge. It is great to see old and new friends and feel the great vibe at Momentum.
But a large part of the day revolves around the general EMC sessions, and listening to people talk about petabytes (sorry, exabytes) and IOPS is not my thing, so it gives me some time to work on this blog.

We started with a perfect session by Arnaud Viltart and the new PM Nick King about xCP 2.1. The list of new functionalities in 2.1 is impressive. As Arnaud pointed out: with this version we made the xCP solution mature. It is not at all a minor release; it is the mature version of xCP 2. And as the 2.0 version was a huge step forward, these new features make it an (almost) perfect case management solution. Almost, because I think we are still missing some sort of learning business rules engine that supports users with suggestions, options and tips, and can even take some automated decisions.
Read more…