Archive for the ‘Documentum’ Category

#MMTM16: Case management: The good, the bad and the ugly

April 27th, 2016

People who have read my previous blogs know I have a soft spot for xCP and case management the Documentum way. Over the past few months I have wondered why that is.

The first and easy answer is that probably more than 70% of the solutions I have personally implemented for my clients were some sort of case management implementation. The next question I asked myself was: what do you mean, case management? It was all document management! What makes it different from the other solutions? Thinking more and more about that question made it clearer where my soft spot comes from.

People who work with me will confirm it: I am a bit more than a little chaotic. To be able to function, it is mandatory for me to have an easy, high-level overview of all the stuff that is hanging somewhere on my to-do list. Case management is (in my eyes) exactly that functionality: give every knowledge worker their own dashboard, with what is important for them at that point in time. And what I see in my day-to-day encounters with end users is exactly that demand, that thirst, for such an overview from every knowledge worker.

It is never about one document only. It is always about a group of information items that, at a specific point in time, have a relation with each other, and that relation has a certain current status that makes it important to show to me (a knowledge worker) right now. I need to be able to drill down to the specific pieces of information and change all pieces at once, separately or in combinations. This has to do with the fact that most (but not all) knowledge workers don't work on one large document, but on a lot of separate pieces of information that need to be handled quickly. When you are one of the quality authors at a life sciences company, or managing all the assets of a large power plant (the other 30% of the solutions I implemented), a truly document-centric management system that focuses on that specific document and its related documents is very important and demands an EDMS like D2. All the rest of us need xCP.

In my experience, all other types of document management systems have far more to do with case management than with real, pure document-centric management: working on pieces of information that have some sort of relation with each other during their existence.

Case management is all about a good dashboard that shows you the right information for the task/action you want to perform.


With TaskSpace, Documentum took the first leap into the real case management world and showed that it is far better to have a case management solution built on the foundation of an ECM system than one based solely on a relational database. This first go at a case management solution was good, but it lacked a good and consistent development environment and a flexible, very user-friendly interface (besides some annoying bugs in the core).

And then came xCP2. The idea is so simple, but so great that I really jumped with excitement when I found it. This is really the vision we were all hoping for from EMC-ECD (IIG at that time). This is the good in my story. The product team who came up with this approach should be decorated :-). Sure, the 2.0 version was far from perfect, it had too many issues and lacked some functionality to make easy deployment and testing possible, but it was clear that this is the direction case management needs to go.

With the new version coming up, and the change in deployment strategy, ECD is taking the right approach to make this a very stable and easy to implement system.

But there is still a bad. This has to do with the foundation of information classification and the way Documentum is structured. It is easy to create a great app in xCP Designer and to make the perfect dashboard and underlying supporting pages. It is relatively easy to make a workable deployment strategy to deploy new features and solve bugs without too much interference for end users. But once in production, with the number of cases growing rapidly, that great dashboard becomes slow, slower, the slowest… At first, you have happy end users, who love the possibilities of designing their interface together with you, and the flexibility and modern look and feel you can give them. Suddenly, after a couple of months, their comments are a bit colder and more distant. In the end, the solution is still good and they are happy, but you feel that the performance of their first and main screen is getting annoying.

So my hope when it comes to xCP and Momentum 2016 is that the new xCP product team has put a lot of thought and effort into improving the performance of the historical queries in xCP. Challenge Jeroen van Rotterdam and his baby xPlore (xDB) to make those queries super-fast. A whole xCP solution is only as good as its main dashboard!

And then came the ugly. Don't be alarmed, ECD, this time it is nothing you need to change :-). The ugly is all about the fight between the top five big IT companies, who like to annoy each other by degrading support for each other's foundations. It started with Apple, who did not like Microsoft Silverlight or Adobe's Flash. The arrogance to simply not support them shows how big their ego is. But that was only the beginning. Now a lot of browser companies don't like Oracle and push to the limits to make the use of Java in your back-end web application difficult or even impossible (Chrome).

And last but not least there is Microsoft, who is doing so well at trying to be friends with everybody now that Nadella is behind the steering wheel, but still needs to show its strength to the others. JavaScript is the most commonly used front-end language for creating a dynamic webpage. It is the new standard for web development and the only easy way to fulfil the UX demands of the new user. But why should that be of any concern to Microsoft or Mozilla? Sure, it is easy to shout about security issues and all, but in the end it is just budget that makes it impossible to make JavaScript run very fast. The difference in performance of an xCP application between IE10 on the one hand and Firefox and Chrome on the other is frightening. Even the new Microsoft Edge is still lacking compared to the others, and we see no improvement in JavaScript speed in the new versions of Firefox. So the ugly is only something we can hope will improve, but it is for sure a challenge we consultants need to be aware of when implementing the next great case management solution in xCP.

MMTM16: Where did the disruption go?

April 11th, 2016

It has been almost 12 months since Rohit spoke the weighty words: we need to disrupt the ECM space. Change is needed and there needs to be an alternative to the 2nd platform. A new direction for ECM!

In this first blog leading up to Momentum 2016, it is a good time to reflect on what has happened since that bold statement, to look back at the years before it, and to see how Rohit got to that statement. Momentum 2016 will be my 21st Momentum, and it has taken me that long to get to the vision I will express below. Since I got in contact with Documentum in 1995 a lot has happened, and I am all for a new direction or a new step in the maturity of ECM, but is it that easy and doable?


In 1982, when Howard Shao and his team came up with Documentum and its object-relational model with a very extensive and flexible security model, it was new and it changed the world of ECM. It is impressive to see that dm_sysobject and dm_acl are still the foundation of Documentum. But a concept from 1982? Is that still current and in line with the 'new normal' of this digital age? It is good to look back, see what has happened with Documentum and why, and try to draw some conclusions about the best next steps (according to me).

When I started with Documentum we were at the peak of the client/server age. Documentum had their super client WorkSpace: a heavy-duty client application with, for that time, a very flexible interface, a lot of functionality and more than acceptable performance. In those times performance was the main pain. The hardware and database capacity made all ECM systems slow, and the immaturity of the platform often made it a challenge to get them ready for production.

In 1998, at my first Momentum, we were all amazed by the new concept of the browser, and Documentum came with their version of an application server with a full interface in it. Whitney Tidmarsh gave a super in-depth session about the new three-tier model and RightSite, and we all knew we would win the world. Documentum was ahead of the competition, but maybe a bit too fast, and the performance and stability of the whole stack was a challenge. Still, building a solution with Documentum was so much easier than with competitors like FileNet or Open Image (Wang). Why was that? It is simple: the base was so strong and consistent that you really could focus on the other challenges.

And the world changed, open source showed its face and the web became more flexible. RightSite was becoming outdated. Documentum invented their 7-layer configuration model for the web: WDK. The idea might have been good, but maintaining any changes was tricky. WebTop is still used a lot; everybody complains about the outdated interface, but I have seen a lot of great implementations that really gave great ECM support to companies across the world. And why is that? I think the answer is still simple: the base is so good: dm_sysobject and dm_acl.

And now, in the new normal with IoT, where the world demands user-friendly and flexible IT, Documentum comes with D2 and xCP2, with interfaces that meet the demand for UX, flexibility and maintainability. With the front end now in control and mature, we see that implementing a good and solid Documentum solution is easy if you know how to combine the perfect foundation with the flexible interface options. It seems that we are there and we can take over the world again.

 


But simultaneously with a great UX, everybody demands the cloud, and more precisely the public cloud. Jeroen van Rotterdam was very right in his statement that Documentum can do a lot, but it is not a multi-tenant environment that fulfills all demands for tenant separation and control. So EMC-ECD needed to come up with a new platform with new demands and possibilities. So project NextGen Server was started, and somewhere last year it was renamed Project Horizon. What I expected of this was that it would be very different and new and all that great stuff, but that one thing would not change: the base is so good: dm_sysobject and dm_acl.

Eleven months after the announcement of Rohit I have to say: I don't know. I have seen a number of demos and videos of Snap, Exchange, Assent, Jazz and Shelf, but that is all; no release date, no playgrounds for partners, etc. So the conclusion for now is simple: did EMC disrupt the ECM space? Not in 2015, and the most important announcement we want for MMTM16 will be about the progress and availability of the disruption: Project Horizon, or whatever the new name is going to be…

What have I seen so far? dm_sysobject and dm_acl are gone… There might be a building block or two that could give you some sort of basic object model, but for the rest it is all XML, so you are free to make a mess out of it. I'm worried that this will mean that we will not reuse the power of Documentum in its new generation, and I think that would be a missed opportunity.

What is interesting to see is what happened with the other big win from EMC-ECD: InfoArchive. They started off with only xDB (the XML database) to archive all the stuff, but before the solution came to its full potential, more control and security was needed. In the end the conclusion was: we need a strong security model and the ability to define clear objects and object structures as powerful and flexible as in Documentum. So they just added the Documentum content server to the mix, and suddenly InfoArchive is very secure and structured. Why? You can guess: Documentum has its perfect dm_sysobject and dm_acl.

So what do I expect to hear at @MMTM16 when it comes to the public cloud? A lot about the new name for Project Horizon and a lot about the perfect new apps that EMC-ECD has created on the platform, but hopefully also something about a perfect foundation that demands structure and control in your object configuration and security, and that looks a lot like dm_sysobject and dm_acl! And last but not least, the way we partners of EMC-ECD can reuse this potential disruption of ECM, because the only way EMC-ECD is capable of disrupting the ECM space is by allowing partners like Informed Consulting to build the perfect vertical apps that will rock the world.

What's up next? In my next blog I'll try to share my thoughts about Documentum xCP 3.0 (or 2.3??) and what the good, the bad and the ugly is in the new IoS case management.

 

A Case of Component Based Authoring

September 30th, 2015

Yesterday afternoon I attended an EMC webinar about their Next Generation solutions for Life Science, when a slide passed by about Component Based Authoring. It was a different way of expressing the same subject Jeroen van Rotterdam addressed recently in his EMC Spark blog called 'Who is using Word?'. From that blog comes this quote:

Then there is the trend towards targeted point solutions with very domain-specific capabilities to create these smaller chunks of content. A generic word processor is far from efficient in this scenario, and even harder to customize with the desired user experience. Content creation applications are so much more powerful in a business context and becoming less focused on text.

It's fun to read about a trend – in this case Component Based Authoring – when you're already practising this approach. It feels to me as if this is the only way forward for the case-based solutions being delivered today.

My current project is implementing an EMC xCP based solution to support a decision-making process where each decision is backed by carefully built cases.

In its previous implementation, documents were the content containers. A lot of copying and rewriting was taking place: a cumbersome and error-prone way of working. We didn't investigate it, but if I were to place a bet, I would say that it's almost a guarantee that each document is formatted uniquely, and it's highly likely that not every document contains the mandatory information. The flip side of the coin is that this freedom is very well received by the end user, who is using Microsoft Word, a tool perceived as very user friendly and productive (don't get me started…), to let his creativity flow.
You could argue that the needs of the end-user are prevailing over those of the enterprise. At Informed Consulting we believe that connecting people and the enterprise should be a win-win situation and is key to success.

With the new xCP solution we’re applying Component Based Authoring and Word is now only needed for the supporting documents. Not for the key information of the case. That key information is divided into logical components and authored independently. With this approach we created a balance between both user and enterprise needs. But in order to achieve this, more is needed than just solving the challenge of business process re-engineering. In fact, in this case the process is hardly changed.

Once you know what key information you need to capture, it's time to let the UX (user experience) designer do her thing. My colleague Sandra did a tremendous job with the key users, designing screens for both capturing and displaying information. There has to be a natural order in the information that fits the way of working in the business. This means defining where on the screen a content component is positioned for a particular role (yes, different roles will typically lead to different positioning…), which content components need just plain text formatting, and which need rich text to be able to add lists, mark text bold or even include hyperlinks, while on the other hand preventing the usage of fonts other than what the corporate style guide dictates. It means defining where you need to restrict input to predefined taxonomies (or just simple drop-down boxes populated with values) and where you need supporting wizards. An example of the latter is one where the user provides answers and numbers, after which the system draws a conclusion that is used as input for the decision. To cut a long story short, information with a good user experience will help to make the transition into component based authoring smooth.

Another key aspect is the transition from paper to digital. A topic on its own. In our project we opted for a gradual transition, because it's more than a business process change to replace meetings full of annotated documents, prepared off-line over the weekend, with information accessed digitally through tablets and laptops. As an intermediate step, the individually authored content components are aggregated in PDF/A documents. These documents are available for on-line reading as well as printing. It's now up to the business themselves to execute the behavioural change process. In the meantime they can still print and scribble away where and whenever they want.

The third aspect I want to mention is archiving. Although it should be part of your business process re-engineering, it typically isn’t. Too often archiving is not seen as a business process. But even if it is, it’s a beast of its own. Still today it is common practice to archive ‘just’ documents. With component based authoring, you can no longer think in terms of archiving documents. Neither can you think in terms of archiving these content components on their own. They have relationships with other content components and together they have meaning. A content component that holds the annotation of an approval, only has meaning in its context. Archiving thus needs to evolve into Contextual Archiving whereby containers are archived and these containers include the appropriate content components as well as their relationships. Rethinking needs to be done around the purpose of the archival and the retention policies. How can you meet the archival goals for a case if key information in that case needs to be destroyed before the case itself gets destroyed? And what will regulators say when you include a content component into multiple containers which are managed independently and whereby not all (logical) instances of the content components are destroyed simultaneously? When you think about it, component based authoring reveals what has been hidden under the covers of a Word document for a long time: we didn’t manage the information but only the container that carried that information…

Times are changing in the ECM playing field. New ways of working, progressing technology, distributed collaboration and blurring boundaries pave the way into an interesting future. Next-Gen ECM / Next-Gen Information Management… Welcome into my world!

 

This post also appeared on LinkedIn.

Day 4 and 3 before it begins

May 3rd, 2015

Yesterday I missed the opportunity to write my blog. Packing was on the menu for the evening. The past few days I could take those 30 minutes to write my blog, but now I had to make sure all items for our booth were packed and ready.

Two laptops, a hub, a lot of flyers and some nice give-aways. I thought I was ready, but then I remembered I needed to finish the last tests of our Office 365 demo with SPA4D and our SharePoint LSQM integration solution. The first is easy. I have given this demo probably 50 times now, and all with great success. Our integration with LSQM is a different matter. We are just releasing this together with the Life Science team of EMC-ECD.

The business case is simple but perfect: within a pharma company, a large group of users need to read the SOPs and other important documents, and this needs to end up in the audit trail. This TBR (To Be Read) is a very basic function within Life Science. Normally the user needs a full LSQM license and needs to be trained in how to operate it. That is not easy, as they might use the system only 2-6 times a year. On the other hand, most of these users use SharePoint; they like it, understand it and are fine working with it. So the task is simple: create the TBR function with SPA4D, and the answer is perfect. A simple task for users to perform, and after the sign-off a record is added to the audit trail for this action.

But that was the easy business case. What is much more interesting is the ability to serve all partners in the life science ecosystem of a company. More and more pharma companies are just managing the process and outsourcing a lot of work to partner companies. These partner companies come in different sizes and shapes, and in very tight or very loose relationships. But for almost all of these companies it is mandatory that they be able to read, comment on and sometimes edit or create regulatory documents. This demand calls for a set of options a company can select from.

1) If the partner is fully trusted and you have a full working relationship, you might want that partner to have direct access to a subset of documents within Documentum. This needs to be a much simpler interface, preferably with a much cheaper cost base, as these users might change frequently. The interface should be simple and easy to use. The access needs to be possible within the extranet of the pharma company or via a cloud-based solution like Office 365.

2) If you work with the partner on a less frequent basis or in a less intensive manner, you might decide that the partner does not need full access to the site, but only read-only access to a part of the site, and should be able to submit documents to be added to your quality system. Again, this should be a cost-effective interface and simple to use. Because of the more limited relationship it should suffice that only the high-level actions of the partner are captured in the audit trail, but versions and revisions should be fully available.

3) If it is a one-time or incidental partner, the partner should only get a copy of the relevant document(s), and communication should be very controlled when documents are added to the system.

And all of this together makes: SPA4LSQM Partner eXchange (PX).

Within the easy-to-use SharePoint interface you can decide what level of trust you give to a certain partner and configure the level of access to your QM solution. Trusted partners will get access to the full browse app part of SPA4D to manage the documents they are entitled to, while partners with less of a relationship will get only read-only access to Documentum and can submit documents via a process within a normal SharePoint library, without having direct access to Documentum. If you want to make the integration even more loosely coupled, you could share the documents with the partner via OneDrive for Business and not even give the partner access to the SharePoint environment, but still control the documents.

All very powerful and very good to demo. So finally at 1.30 am all was tested and I was ready to go. Now I'm sitting in a Delta plane for the last hour before we touch down in Vegas after a long, long flight. Hope to see you there, and let me impress you with a good demo of SPA4LSQM, or join us in the raffle for a very nice toy.

Day 8 in the countdown and UI is key for Case Management

April 27th, 2015

and the story continues…

Today is King's Day in The Netherlands. A good day to dress in orange and have some fun. One of the 'fun' things is that everybody is allowed to sell their junk. A garage sale, only with everybody together in one street on little carpets. In theory it is for the kids, but the parents control the cash. 🙂 Walking with my seven-year-old daughter and seeing her rushing through the stuff to find the perfect thing, I could not help but drift back to my previous blog: user experience and case management.

Walking with hundreds of people in one little street and looking at hundreds of carpets with stuff, how do I see what I need and what I really should steer away from? There are some basic rules. If it is dirty, stay away. If it is boy stuff, probably not interesting; if it is all black and army green, same thing. If it is pink, white or light blue, stop and have a look; if there are two girls aged 10-14 sitting on the carpet, same thing. So in less than an hour we were able to 'do' the street and my daughter ended up some good stuff richer.

And doing good case management is all about this. How can I, as a designer, set up a page of a case or a task in such a way that the person looking at it can easily make a judgement on the case within seconds? Working with our user experience designers at Informed Consulting, I notice they use the same concepts I just described to create the PERFECT page:

  • Simple and serene look and feel;
  • Try to identify blocks of data that have some sort of understandable relationship within the whole case/task;
  • Use colors and/or icons to show states and actions;
  • Distinguish between viewing and editing;
  • ‘Important’ stuff should be in the top center;

And the list is longer, but when a good UI expert is finished, it all sounds so natural, so logical. It is super, but sometimes also a bit frustrating, to see the reactions of the users. I spent hours and hours defining all the requirements SMART and well, and came up with the perfect solution and set of functions needed per role. But only when they have seen our mock-up do the users get excited: this is what I want, this is what we need! When do we get it?

Suddenly, that system that helps them do their tasks the way the company wants them to is actually fun to use, simple and easy. Things I did not hear a lot when developing a WebTop solution.

At our booth in the Momentum area we are showing our great products SPA4D and LoBConnect, but if you are interested in good xCP2 design or a good mock-up, please step up to our booth and I will show some great examples.

9 days to go to EMCW: why is case management so cool?

April 26th, 2015

I was just trying to finish my last tasks before World. One important task is finishing a mock-up for a client: a mock-up for their new xCP 2.1 application. Doing this, I wonder more and more about this new concept of case management. Is it just another buzzword or does it really introduce a new paradigm in ECM?

The basic functional difference is the ability to create an object with relations and transactions (sorry, I have to say stateless processes; very good name…?) and to have structured datasets (contentless objects, another perfect name to explain to an end user…?). All nice technical functions that I really like, but does it make that much difference to the end user? Is this the ECM chicken with the golden egg, or just some extra nice modern features?

Where it gets interesting is looking at the challenges of a case. Simply put, a case is a large amount of related information about a set of tasks that have to be performed or a goal that has to be reached. The challenge for every user within a case management solution is the overview. How do I see at a glance what the case is about and what I am supposed to do with it? With all those great technical options, relations, transactions etc., it is so difficult to see the tree within the forest (nice Dutch expression, but I think everybody will understand this one instantly).

So why is case management a game changer, and what is the basic necessity for a good case management system? In the end it is easy, maybe not for the average Documentum consultant, but the answer is: USER EXPERIENCE. Simple but so true. To be able to give an end user a system that is workable, it needs to have such a perfect user design that within seconds a caseworker knows what it is all about.

A normal Documentum consultant who was used to working with WebTop did not really know, or want to know, what user design was, for the simple reason that designing any good user experience in WebTop was a challenge, or should I say impossible.

But now we have xCP. This gives the designer a really flexible tool to fully design the interface and give the person the right display of information to work very efficiently and like what they are doing. The interaction, or should I use another buzzword, the agile approach, that you can have with the user group before you start to create a technical solution, simply by creating a mock-up, is baffling. Users cannot wait to get the system, workshops are actually fun to do and the results using tools like Axure are super. So far my thoughts about the new xCP; tomorrow some more detail about the options of a good design (as far as this simple Documentum consultant understands it).

Documentum Dump and Load limitations

March 11th, 2015

Lately I’ve been involved in a project where we used Documentum’s dump/load feature to copy a lot of documents from one repository to another. We successfully copied millions of documents, folders and other objects, but this success did not come easy. In this blog I would like to share some of the issues we had for the benefit of others using dump and load.

A standard tool

Dump and load is a tool that can be used to extract a set of objects from a Documentum repository into a dump file and load them into a different repository. Dump and load is part of the Documentum Content Server. This means it can be used with any Documentum repository in the world. The tool is documented in the Documentum Content Server Administration and Configuration Guide (find it here on the EMC Support site). The admin guide describes the basic operation of dump and load, but does not discuss its limitations. There is also a good Blue Fish article about dump and load that provides a bit more background.

A fragile tool

Dump and load only works under certain circumstances. Most importantly, the repository must be 100% consistent, or a dump will most likely fail. So my first tip: always run dm_clean, dm_consistencychecker and dm_stateofdocbase jobs before dumping and fix any inconsistencies found.
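
A quick way to check this is to look at when those housekeeping jobs last ran. The DQL below is a minimal sketch; the job object names are the common defaults and may differ per repository and version, and the comment line is an annotation only (drop it if your query tool does not accept comments):

    -- sketch: check when the housekeeping jobs last ran and completed
    SELECT object_name, a_last_invocation, a_last_completion, a_current_status
    FROM dm_job
    WHERE object_name IN ('dm_DMClean', 'dm_ConsistencyChecker', 'dm_StateOfDocbase')
    ORDER BY object_name

If a job has not completed recently (or at all), run it from Documentum Administrator first and fix whatever the consistency checker reports before you attempt a dump.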

Dump Limitations

The dump tool has limitations. Dump can be instructed to dump a set of objects using a DQL query (a rough sketch of such a dump script follows at the end of this section). The dump tool will run the query and dump all selected objects. It will also dump all objects that the selected objects reference. That includes the objects' ACLs, folders, users, groups, formats, object types, etc. This is done in an effort to guarantee that the configuration in the target repository will be OK for the objects to land in. This feature causes a lot of trouble, especially when the target repository has already been configured with all the needed object types, formats, etc. It causes a 100-object dump to grow into a dump of thousands of objects, slowing the dump and load process. Worse, the dump tool will dump any objects that are referenced from the original objects by object ID. This causes the folder structure for the selected documents to be included, as well as the content objects, but it can also cause other documents to be included, including everything that those documents reference (it is a recursive process). This method can backfire: if you select audit trail objects, for instance, all objects that they reference will be included in the dump.
Now this would not have been so bad if the dump tool did not have size limitations, but it does. We found, for instance, that it is impossible to dump a folder that has more than 20,000 objects in it (though your mileage may vary). The dump tool just fails at some point in the process. We discussed it with EMC Support and their response was that the tool has limitations that you need to live with.
As another example, we came across a repository where a certain group had many supergroups. This group was a member of more than 10,000 other groups. This was also too much for the dump tool. Since this group was given permissions in most ACLs, it became impossible to do any dumps in that repository. In the end we created a preparation script that removed this group from the other groups and a post-dump script to restore the group relations.
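
For readers who have not used the tool before, the dump definition itself is just an object you create through the API. The sketch below is illustrative only: the file path, object type and folder predicate are placeholders, and real dumps typically need more parameters, so check the admin guide for your Content Server version. It shows the general shape of an iapi dump script:

    create,c,dm_dump_record
    set,c,l,file_name
    /data/dumps/projectx.dmp
    set,c,l,include_content
    T
    append,c,l,type
    dm_document
    append,c,l,predicate
    dm_document where folder('/ProjectX Cabinet', descend)
    save,c,l

Everything selected by the predicate is dumped together with everything it references, which is exactly how a small, targeted dump can balloon into the thousands of objects described above.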

Load Limitations

The load tool has its own limitations. Most importantly, we found that the bigger the dump file, the slower the load. This means that a dump file with 200,000 objects will not load in twice the time it takes to load 100,000 objects; it will take longer. We found that in our client's environment we really needed to keep the total object count of the dumps well below 1 million, or the load time would go from hours to days. We learned this the hard way when we had a load fail after 30 hours and we needed to revert it and retry.
Secondly, objects may be included in multiple dump files, for instance when there are inter-document relations. For objects like folders and types this is fine; the load tool will see that the object already exists and skip it. Unfortunately this works differently for documents. If a document is present in 3 dump files, the target folder will hold 3 identical documents after they have been loaded. Since you have no control over what is included in a dump file and you cannot load partial dump files, there is little you can do to prevent these duplications. We had to create de-duplication scripts to resolve this for our client (a sketch of a simple duplicate-detection query follows below). We also found that having duplicates can mean that the target docbase ends up with more documents than the source and that the file storage location or database can run out of space. So for our production migration we temporarily increased the storage space to prevent problems.
Another limitation concerns the restarting of loads. When a load stops halfway through, it can be restarted. However, we have not seen any load finish successfully after a restart in our project. Instead it is better to revert a partial load and start it all over. Reverting is much quicker than loading.
Finally, we found that after loading, some metadata of the objects in the target repository was not as expected. For instance, some fields containing object IDs still had IDs of the source repository in them, and some had NULL IDs where there should have been a value. Again we wrote scripts to deal with this.
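
We obviously cannot share the client-specific de-duplication scripts, but the detection step is conceptually as simple as the DQL sketch below (a minimal sketch, assuming duplicates share their object name and live under one folder tree; the cabinet path is a placeholder and the comment line is annotation only):

    -- sketch: list document names that occur more than once under the loaded folder tree
    SELECT object_name, count(*)
    FROM dm_document
    WHERE FOLDER('/ProjectX Cabinet', DESCEND)
    GROUP BY object_name
    HAVING count(*) > 1

Whether two hits are really the same document still needs a check on something stronger than the name (a source object ID kept in an attribute, the creation date or a content checksum), but a query of this kind is a reasonable starting point for the clean-up.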

As a final piece of advice, I would encourage you to run all the regular consistency and cleaning jobs after finishing the loading process. This includes dm_consistencychecker, dm_clean, dm_filescan, dm_logpurge, etc. This will clean up anything left behind by deleting the duplicate documents and will ensure that the docbase is in a healthy state before it goes back into regular use.

As you may guess from this post, we had an exciting time in this project. There was a tight deadline and we had to work long hours, but we had a successful migration and I am proud of everyone involved.

If you want to know more, or want to share your own experience with dump and load, feel free to leave a comment or send me an email (info@informedconsulting.nl) or tweet (@SanderHendriks).

 

Social Welfare at the ECM Workplace

January 19th, 2015

A few months ago, Linda gave birth to her son Luca. Linda is the wife of Stephan, a colleague of mine. Curious as he is, Luca was premature when he decided that it was time to see the light of day. That by itself wasn’t any problem at all. The world was ready for him.

The birth of Luca triggered me to share a story that I tell my customers in the early days of a document management project. By now you are probably wondering why the birth of Luca triggered this story.

Here in the Netherlands, we have a social welfare system in place that kicks in during the early days of a pregnancy. Not only is the health of both mother and child monitored, but the system also ensures a safe home is in place for the newborn. It may sound overanxious, but one of the checks they do is to see if you have a cradle for the baby. That same social welfare system functions as a lifeline throughout your entire life, until you come to your grave at a ripe old age.

That lifeline provides the guidance, the procedures, the policies and the actions to fall back upon during your life. It's the blueprint of the minimal life. You can still live your life to the max the way you want it, as long as you don't underperform and drop below the minimum that the lifeline provides. It also takes into account the stages that you pass through in your life. You may become a parent yourself, which gives you access to child support. You may develop a partial disability to work, which provides access to special compensation benefits. And even a basic pension is provided when you reach the age of 65+.

For us humans, the Social Welfare system provides the lower limit blueprint of our life from Cradle to Grave.

If you’ve read my previous post (Diversity at the ECM Workplace) about Connecting People to the Enterprise, you will understand that bringing and keeping your users on board requires an ECM solution that is easy to use but still honours the enterprise needs. One aspect that you need to facilitate is what I call the Social Welfare for the ECM Workplace.

Cradle to Grave is the concept that implements core information management functions, which become a lifeline throughout the entire life of your documents.

If I create a new document, the system needs to be ready for that. It needs to support the cradle. This can be done if the lifeline supports me with e.g. content types, templates, managed metadata and rule-based storage. In these early days in the life of the document, it needs the lifeline to understand whether it is going to be a contract based on the English language template. We stick more labels on the document to classify it and together that allows a document management solution to decide where the cradle should be located.

That lifeline also provides the guidance, the procedures, the policies and the actions to fall back upon during the life of the document. It will pass stages depending on the life it lives. In the infant stages you’ll see typical states like draft, and for review. In the adolescent stage the document will go up for approval, and get approved. While the document matures, it can use the supporting processes to move between these states and stages. At some point in time it might become a reference document to others which alters the access permissions as well as its information classification. Some documents will move from classified to unclassified, from internal use only to publicly available.

Like all of us, there comes a time when also the document will retire. It will be withdrawn from active service but is still available in some archived format with again changed access permissions and information classification. It may also move into a new storage location.

For managed information, laws, rules and regulations determine the length of the pension. There is no fixed rule for this, just like nobody knows how many years one is given to enjoy old age. The harsh reality is that it won't last forever. For managed information, the grave implies that the information is deleted from the ECM solution or moved out of the system to preserve its historical value elsewhere.

Depending on your requirements and circumstances, you determine what that lower limit is and which ‘social benefits’ you provide your users.
For managed information, Social Welfare for the ECM Workplace provides the lower limit blueprint of the life of that information from Cradle to Grave.

So, why did the birth of Luca trigger this? Because of the parallel between the Dutch social welfare system and Cradle to Grave. You don't want a fixed path for your newborn, nor should there be a one-off approach for your documents if you want to keep your users connected with your enterprise needs. But the opposite is also true: you don't want uncontrolled chaos in either situation. It should be predictable, acknowledging that new documents get created and deleted and need to be managed in between. From Cradle to Grave.

Just as the concepts of Diversity and Cradle to Grave match perfectly in real life, they match perfectly in our ECM world. Take a look at SPA4D.com if you want to learn more about how we can help connect SharePoint collaboration functionality to the enterprise control of Documentum. Or watch our blog for more articles on enterprise information management.

Diversity at the ECM Workplace

November 10th, 2014

Just the other day I was driving home from the office, reflecting on events that happened in the last few days and weeks. As always, driving home is one of those precious moments where I can sit back and reflect. Sitting in the car in traffic, it finally dawned on me.

For a couple of days I had been trying to put my finger on something that bothered me. I had been working on multiple engagements over the last few weeks. Some related only to EMC Documentum, some only to Microsoft SharePoint, and some included both. All were in different industries. If you didn't know better, you would say there was nothing they had in common. But there was.

EMC World-Momentum 2014 – The final view

May 15th, 2014

Sitting in a plane going back home after four long days of bathing in Documentum information, discussions and fun, I'm ready to form a final view of the vision of the IIG team, the challenges and opportunities I see, and whatever more there was.

First off, I have to thank the EMC and IIG marketing teams for organizing a very good venue. Overall everything went more than smoothly, and only getting in and out of ballroom A with 12,000 people demanded a little patience. The party at TAO and the Imagine Dragons are not easily forgotten.

The fact that (almost) all sessions for us Documentum geeks were on one level, with the perfectly located Momentum lounge in the middle and the separate area in the solution pavilion, made it feel almost as if the Core freaks were not there :-).

Now about the content.