Archive

Archive for the ‘Documentum’ Category

End of day 2, no 3, and the plot thickens

July 12th, 2017 Comments off

Oops, I missed a day. The plan was… just a plan, and plans are there to be changed. Had I stuck to the plan, I could have written a blog about hockey or rum & coke. But all Canadians know by now that I talk about field hockey and not ice hockey, so I'm far from an expert on that subject. And the rum & coke last night made it a major risk to write something and share it with the world.
Now that the first day of the expo is over and there are no plans for a night downtown with a drink or two, this is a good time to turn this into a double blog.

The second half will be about the venue itself: the good, the bad and the ugly of my first day at Enterprise World after two days at the PartnerSummit. But first we need to be serious about the other products in the OpenText stack (or should I say opentext stack? If I'm correct, the new brand spells OpenText all lower case??)

In this first part of the blog, I will talk about the product that I think will be a cash cow for Informed Consulting. Almost every expert I talk to within OpenText tells me that InfoArchive is the cash cow, but stubborn as I am, I have a different product in mind. InfoArchive is not new to us. As someone with very high respect for Jeroen van Rotterdam (the visionary behind InfoArchive) I should be jumping to sell and implement it, but I'm not (yet). I'm not saying it is not a cash cow; I'm saying it is not a cash cow for Informed Consulting (yet).

Let me explain. Our focus is heavily on compliance-demanding organizations like life sciences, engineering and regional government. The management responsible for deciding on these types of implementations is always the business, not IT. InfoArchive is a solution focused on lowering IT cost, a pure CTO decision. So finding the right person to talk to and gaining their trust is a lengthy process. Secondly, we've talked to a number of our clients about InfoArchive and they all recognize the problem, and could even describe in detail what they would do to solve it, but as long as the legacy systems are running there are more pressing matters at hand that need to be solved. And lastly, we at Informed Consulting have already had a number of discussions about how to set up a good and structured analysis process for defining your InfoArchive metadata structure. The question being: what are the 10 to 20 top questions to ask to understand how the data from the different solutions should be structured together. You could completely separate all the different content/data parts, but then you lose a lot of extra value.

So, if it is not InfoArchive, what will it be? Why do you ask? This should really be a no-brainer: ANALYTICS – iHub and friends. I'm adding the perfect slide that I got in one of the sessions, because I think this picture says more than a thousand words. (Slide: OpenText Analytics Suite)

This approach to analytics is so simple and yet so perfect that you (we) should be able to convince every customer that this is a mandatory product. The problem of information overload is clear to anyone. The need to bring structure to the massive piles of unstructured information is also clear. The fact that you need to use the available structured data to bring more structure to the unstructured data follows easily from that. So what are the challenges?

  1. It should be configuration only. If too much IT involvement is needed when configuring or adding new data streams, it will lose traction with the business and they will try other options, like MS Access, MS Excel or other stuff. I don't know iHub's answer to this yet, but my first impression is very good.
  2. A clear description/selection of which process to follow when adding a data stream: is it unstructured data, does it need text analysis, does it need interpretation, does it need AI analysis, etc. etc.
  3. Easy to create and adapt the user interface: with dashboards, the user experience is everything. The problem is that the standards for good design currently change on an almost daily basis, so this should be very simple and very flexible.
  4. An easy WYSIWYG editor to relate data from different streams and identify the relations between them.
  5. Pricing: for the delivered functionality a good (high) price is very justifiable, but there is a major risk that point solutions, like repository-specific reporting tools with a much lower price, will be chosen because they can deliver the basic trick. So there needs to be a lightweight starter module that gets the solution going and can easily be upgraded once the demand is there.

As I don’t know for sure any of the answers on these 5 challenges, I have my work cut out for me the coming weeks. So, the product needs to be in very bad shape not to end up in the portfolio of Informed Consulting (if we are permitted to resell and implement it 😄). A lot of food for thought.

Now for the second part: the good, the bad and the ugly of the #OTEW venue (not the actual content of the conference, but the setup). As a user conference veteran, you see that all event agencies in North America look closely at each other, and conference venues all use a number of standard concepts. So there is not much unique to report on that side. But there are some minor points to mention:

The good:

  • Having buses and tuk-tuks with the OT logo on them makes you feel more special.
  • Being able to make your own fresh toast at breakfast, for the small crowd of 4,000 people eating there. I have not seen this before and it gets a big round of applause!

The bad:

  • The expo hall is chaotic. All pods are randomly assigned a product. Not only are the partner pods placed randomly, there is also no structure in the OT pod layout. So I see all attendees scouring the room to find some info. Next time, please group the pods into information groups.

The ugly:

  • No, I’m not sharing pictures of all the sweating people I saw after breakfast. It is 3 floors up, walk over the street, 2 floors down again, a nice walk and then end it all with another floor down to get to the expo. Yes elevators… but still,  this is nasty.

Going to think with my eyes closed now…

The end of day 1 – way too much information

July 9th, 2017 Comments off

My first PartnerSummit day at OpenText. Wow, that was heavy. There is so much to hear, think about and discuss that I'm beat. I'm lying in my hotel bed for some quiet (with the live band rocking downstairs😬), trying to sort all the info I gathered today and see how I can turn this into a structured blog…

Maybe I need some data analytics to define structure, or get some concept groups going to bring relevance to my information search? People who know me know that it is not easy to structure my brain and no artificial intelligence will be able to work with this mess, so I'm stuck with my own brain and we'll see what falls out.

The task I set out to accomplish today was to identify at least 4 products that would be a great addition to our portfolio. But after one day and way (WAY) too many products, I must admit: this is going to take a little bit longer. 😀

But the simple joke about my brain is so relevant to the journey we as a consulting company are on, that it will be the common thread of this blog. The focus of Informed Consulting is very much on helping clients identify and address/manage their compliance demands, supporting their journey of controlling the information processes that are relevant to being compliant. It does not really matter whether you are in the life sciences space, an engineering company, a government agency or a bank: our clients all have growing industry-specific compliance demands to meet. Until recently (1-2 years ago) Documentum could fulfill most of these demands, and that was what we offered. But times are changing rapidly; even a mid-size pharma company has an enormous amount of information that needs to be managed and reused, and just a 'simple' ECM system, a metadata taxonomy and search will not do the trick. We need to be able to give the client a total experience that lets them fit their compliance needs into their overall EIM vision and tooling, to guarantee the best experience, both for users and for the company.

Hey, that sounds familiar: Connecting people to the enterprise. The slogan that has been our vision the past 7 years is still, more than ever, alive!

But why, if we have been doing this for years already, is change needed? We live in a spectacular time. Simply put, on both sides of this equation (user and enterprise) the demands are growing exponentially. That, topped with a more than exponential growth in data, demands a total, vertically focused solution, specific to the client. NO, not custom built, but a carefully packaged set of products, specifically targeted at the client's needs. The challenging task for the new ECM consultancy firms is not only to have the best developers or configurators. No, it is to have the best architects, who can create an out-of-the-box total solution based on a predefined set of products, all handpicked and configured (composed) to meet the needs of that specific customer in that specific vertical market, and to have an implementation team that understands the vertical and the vision of the architect and is able to deliver before a very time-sensitive deadline.

So that is what we at Informed Consulting have been doing for the past years, and we have a clear plan for how we can deliver the much-needed expertise to build that total solution. And now that Documentum is part of the rich set of OpenText products, we can rapidly deliver this total solution based on the products from OpenText, combined with our tailor-made products for specific markets, like our Quality Management Solution for the life sciences market.

For a life science company that total solution will include:

  • Documentum Life Sciences suite
  • eQMS express
  • SharePoint connector for Life Sciences, for easy communication with partners via partner exchange
  • iHub for analytics, to support a quick overview of all the states of the information and processes, and to manage and control every inspection
  • Decisiv for easy enterprise search, not only in their ECM system but also in systems like their LIMS. This is not eDiscovery but real AI-driven enterprise search.
  • Blazon for regulator-approved renditions
  • Marketing management that works with the ECM content
  • Customer communication management to streamline and audit all complaints and inspections
  • InfoArchive to really archive the old documents and make sure they stay compliant

And the list goes on and on. Sure, no client can do it all at once, but we have a road map that will fit any client!

So, a great first day; a happy partner and a SUPER road map for Documentum make it complete.

Hope we can continue this happy flow for 6 more days!!

OpenText Enterprise World 2017 – Day 1

July 9th, 2017 1 comment

We are going into the third quarter since the acquisition of the ECD (Documentum) group into the OpenText family, and the start of my first Enterprise World. A good time to start the first of my series of blogs around a user conference. For the past 20 years it was Momentum or EMC World, and I'm more than excited that it is now a software-only conference, fully focused on EIM (for me still the extension of ECM).

So, in the blogs for the next 7 days I will focus on one aspect of the great OpenText portfolio and the things that will make my customers smile (or worried, if I see a challenge).

But how do I feel? What do I think of the merger so far? Questions I get asked more than daily, and a good start for the blog series. In the beginning, I was careful with my answer. My company depends for about 65% on Documentum consultancy and products, so if that were to suddenly disappear it would not be so good😊.

But now, going into the third quarter of the game and the first full fiscal year of OpenText, my answer is clear and open.

WHY DID THIS NOT HAPPEN SOONER!!

I think I was not unhappy as an EMC partner. The ECD team was kept relatively separate from the rest, and when you talked with the real architects of EMC they actually understood what ECM software could bring to the whole information lifecycle vision of EMC.
But in the end the financial figures are clear. The hard dollars that ECD could bring to the table were not enough to have a loud voice and demand room for the much-needed choices. At OpenText everybody talks software for managing your enterprise information. You don't need a loud voice to get someone to listen to the opportunities that are there in the ECM space.
The product managers I talk to from the old ECD are almost scared by the amount of help, support and vision they get from C-level management.

Is it all good? No, this is real life; nothing is all perfect (except my wife and children😄).

Not all good product managers came to or stayed with OpenText. Rohit Ghai is unique, and his support for and fight for the Documentum vision will be missed. And lastly, not all products will have the same focus they used to have.

But Jeroen, stop looking at the past 22 years and look at the great opportunities that lie ahead. The coming week I will focus on a number of the great opportunities that result from this merger and the chances it gives me and my team to offer more to our clients.

A couple are simple:

– Documentum and analytics. A marriage that will last, but I will touch on that in a separate blog.

– Extended ECM for Documentum: OpenText (Muhi) WHY NOT, THIS SHOULD BE A NO BRAINER!

– Customer Experience Management using content from Documentum.

– LEAP apps working with Content Suite content

– AppWorks working with Documentum content

Like a good discussion? Disagree or just want a nice talk? Come to our pod in the expo or respond to this blog or one of the next to follow.

#MMTM16: Case management: The good, the bad and the ugly

April 27th, 2016 Comments off

People who have read my previous blogs know I have a soft spot for xCP and case management according to Documentum. Over the past months I have wondered why that is.

The first and easy answer is the fact that probably more than 70% of the solutions I have personally implemented for my clients were some sort of case management implementation. The next question I asked myself was: what do you mean, case management? It was all document management! What makes it different from the other solutions? Thinking more and more about that question made it clearer where my soft spot comes from.

People who work with me will confirm: I am more than a little chaotic. To be able to function, it is mandatory for me to have an easy, high-level overview of all the stuff hanging somewhere on my to-do list. Case management is (in my eyes) exactly that functionality. Give every knowledge worker their own dashboard, with what is important to them at that point in time. And what I see in my day-to-day encounters with end users is that demand, that thirst, for such an overview from every knowledge worker.

It is never about one document only. It is always about a group of information items that have, at this specific point in time, a relation to each other, and this relation has a certain current status that makes it important to show to me (a knowledge worker) right now. I need to be able to drill down to the specific pieces of information and change all pieces at once, separately or in combinations. This has to do with the fact that most (but not all) knowledge workers don't work on one large document, but on a lot of separate pieces of information that need to be handled quickly. When you are one of the quality authors of a life sciences company, or are managing all the assets of a large power plant (the other 30% of the solutions I implemented), having a real document-centric management system, focused on that specific document and its related documents, is very important and demands an EDMS like D2. All the rest of us need xCP.

In my experience, all other types of document management systems have way more to do with case management than with real, pure document-centric management: working on pieces of information that have some sort of relation with each other during their existence.

Case management is all about a good dashboard that shows you the right information for the task/action you want to perform.


With TaskSpace, Documentum took the first leap into the real case management world and showed that it is way better to have a case management solution built on the foundation of an ECM system than a case management system based solely on a relational database. This first go at a case management solution was good, but it lacked a good and consistent development environment and a flexible, very user-friendly interface (besides some annoying bugs in the core).

And then came xCP2. The idea is so simple, yet so great, that I really jumped with excitement when I found it. This is really the vision we were all hoping for from EMC-ECD (IIG at that time). This is the good in my story. The product team who came up with this approach should be decorated :-). Sure, the 2.0 version was far from perfect; it had too many issues and lacked some functionality to make easy deployment and testing possible, but it was clear that this is the direction case management needs to go.

With the new version coming up, and the change in deployment strategy, ECD is taking the right approach to make this a very stable and easy to implement system.

But there is still a bad. This has to do with the foundation of information classification and the way Documentum is structured. It is easy to create a great app in xCP Designer and to make the perfect dashboard and underlying supporting pages. It is relatively easy to make a workable deployment strategy to deploy new features and solve bugs without too much interference for end users. But once in production, with the number of cases growing rapidly, that great dashboard becomes slow, slower, the slowest… At first you have happy end users, who love the possibilities of designing their interface together, and the flexibility and modern look and feel you can give them. Then, after a couple of months, their comments become a bit colder and more distant. In the end the solution is still good and they are happy, but you feel that the performance of their first and main screen is getting annoying.

So my hope when it comes to xCP and Momentum 2016 is that the new xCP product team has put a lot of thought and effort into improving the performance of the historical queries in xCP. Challenge Jeroen van Rotterdam and his baby xPlore (xDB) to make those queries super-fast. A whole xCP solution is only as good as its main dashboard!!

And then came the ugly. Don't be alarmed, ECD, this time it is nothing you need to change :-). The ugly is all about the fight between the top 5 big IT companies, who like to annoy each other by downgrading support for each other's foundations. It started with Apple, who did not like Microsoft Silverlight or Adobe's Flash. The arrogance to simply not support them shows how big their ego is. But that was only the beginning. Now a lot of browser vendors don't like Oracle and push to the limits to make the use of Java in your back-end web application difficult or even impossible (Chrome). And last but not least Microsoft, who is doing so great at trying to be friends with everybody now that Nadella is behind the steering wheel, still needs to show its strength to the others. JavaScript is the most commonly used front-end language for creating a dynamic webpage. It is the new standard for web development and the only easy way to fulfil the UX demands of the new user. But why should that be of any concern to Microsoft or Mozilla? Sure, it is easy to shout about security issues and all, but in the end it is just budget that keeps them from making JavaScript run very fast. The difference in performance of an xCP application between IE10 on the one hand and Firefox and Chrome on the other is frightening. Even the new Microsoft Edge is still lacking compared to the others, and we see no improvement in JavaScript speed in the new versions of Firefox. So the ugly is only something we can hope will improve, but it is for sure a challenge we consultants need to be aware of when implementing the next great case management solution in xCP.

#MMTM16: Where did the disruption go?

April 11th, 2016 Comments off

It has been almost 12 months since Rohit spoke the weighty words: we need to disrupt the ECM space. Change is needed, and there needs to be an alternative to the 2nd platform. A new direction for ECM!

In this first blog leading up to Momentum 2016, it is a good time to reflect on what has happened since that bold statement, on the years before, and on how Rohit got to that statement. Momentum16 will be my 21st Momentum on the road to the vision I will express below. Since I first got in contact with Documentum in 1995 a lot has happened, and I am all for a new direction or a new step in the maturity of ECM, but is it that easy/doable?


In 1982, when Howard Shao and his team came up with Documentum and its object-relational model with a very extensive and flexible security model, it was new and it changed the world of ECM. It is impressive to see that dm_sysobject and dm_acl are still the fundament of Documentum. But a concept from 1982? Is that still relevant and in line with the 'new normal' of this digital age? It is good to look back, see what has happened with Documentum and why, and try to draw some conclusions about the best next steps (according to me).

When I started with Documentum we were at the height of the client/server age. Documentum had its super client WorkSpace: a heavy-duty client application with, for that time, a very flexible interface, a lot of functionality and more than acceptable performance. In those times performance was the main pain. The hardware and database capacity made all ECM systems slow, and the immaturity of the platform often made it a challenge to get them ready for production.

In 1998, at my first Momentum, we were all amazed by the new concept of the browser, and Documentum came with its version of an application server with a full interface in it. Whitney Tidmarsh gave a super in-depth session about the new three-tier model and RightSite, and we all knew we would win the world. Documentum was ahead of the competition, but maybe a bit too fast, and the performance and stability of the whole stack was a challenge. Still, building a solution with Documentum was so much easier than with competitors like FileNet or Open Image (Wang). Why was that? It is simple: the base was so strong and consistent that you really could focus on the other challenges.

And the world changed, open source showed its face and the web became more flexible. RightSite was becoming outdated. Documentum invented its 7-layer configuration model for the web: WDK. The idea might have been good, but maintaining any changes was tricky. WebTop is still used a lot; everybody complains about the outdated interface, but I have seen a lot of great implementations that really gave great ECM support to companies across the world. And why is that? I think the answer is still simple: the base is so good: dm_sysobject and dm_acl.

And now, in the new normal with IoT, where the world demands user-friendly and flexible IT, Documentum comes with D2 and xCP2, with interfaces that meet the demand for UX, flexibility and maintainability. With the front end now in control and mature, we see that implementing a good and solid Documentum solution is easy if you know how to combine the perfect foundation with the flexible interface options. It seems that we are there and we can take over the world again.

 


But simultaneously with a great UX, everybody demands the cloud, and more precisely the public cloud. Jeroen van Rotterdam was very right in his statement that Documentum can do a lot, but it is not a multi-tenant environment that fulfills all demands for tenant separation and control. So EMC-ECD needs to come with a new platform with new demands and possibilities. So project NextGen Server was started, and somewhere last year it was renamed Project Horizon. What I expected of this was that it would be very different and new and all that great stuff, but that one thing would not change: the base is so good: dm_sysobject and dm_acl.

11 months after Rohit's announcement I have to say: I don't know. I have seen a number of demos/videos of Snap, Exchange, Assent, Jazz and Shelf, but that is all: no release date, no playgrounds for partners, etc. So the conclusion for now is simple. Did EMC disrupt the ECM space? Not in 2015, and the most important announcement we want from MMTM16 will be about the progress and availability of the disruption: Project Horizon, or whatever the new name is going to be…

What have I seen so far: dm_sysobject and dm_acl are gone… There might be a building block or two that gives you some sort of basic object model, but for the rest it is all XML, so you are free to make a mess out of it. I'm worried that this means we will not reuse the power of Documentum in its new generation, and I think that would be a missed opportunity.

What is interesting to see is the other big win from EMC-ECD: InfoArchive. It started off with only xDB (the XML database) to archive everything, but before the solution came to its full potential, more control and security were needed. In the end the conclusion was: we need a strong security model and the ability to define clear objects and object structures, as powerful and flexible as in Documentum. So the Documentum content server was added to the mix, and suddenly InfoArchive is very secure and structured. Why? You can guess: Documentum has its perfect dm_sysobject and dm_acl.

So what do I expect to hear at @MMTM16 when it comes to the public cloud? A lot about the new name for Project Horizon and a lot about the perfect new apps that EMC-ECD has created on the platform, but hopefully also something about a perfect fundament that demands structure and control in your object configuration and security, one that mimics dm_sysobject and dm_acl!! And last but not least, the way we partners of EMC-ECD can reuse this potential disruption of ECM, because the only way EMC-ECD is capable of disrupting the ECM space is by allowing partners like Informed Consulting to build the perfect vertical apps that will rock the world.

What’s up next? In my next blog I’ll try to reflect my thought about Documentum xCP3.0 (or 2.3??) and what is the good, the bad and the ugly is in the new IoS case management.

 

A Case of Component Based Authoring

September 30th, 2015 Comments off

Yesterday afternoon I attended an EMC webinar about their Next Generation solutions for Life Science, when a slide passed by about Component Based Authoring. It was a different way of expressing the same subject Jeroen van Rotterdam addressed recently in his EMC Spark blog called ‘Who is using Word?’. From that blog comes this quote:

Then there is the trend towards targeted point solutions with very domain-specific capabilities to create these smaller chunks of content. A generic word processor is far from efficient in this scenario, and even harder to customize with the desired user experience. Content creation applications are so much more powerful in a business context and becoming less focused on text.

It’s fun to read about a trend – in this case Component Based Authoring – when you’re already practising this approach. It feels for me as if this is the only way forward in case based solutions being delivered today.

My current project is implementing an EMC xCP based solution to support a decision-making process where each decision is backed by carefully built cases.

In its previous implementation, documents were the content containers. A lot of copying and rewriting was taking place: a cumbersome and error-prone way of working. We didn’t investigate it, but if I were to place a bet, I would say it’s almost guaranteed that each document is formatted uniquely, and it’s highly likely that not every document contains the mandatory information. The flip side of the coin is that this freedom is very well received by the end user, who uses Microsoft Word, a tool perceived as very user friendly and productive (don’t get me started…), to let his creativity flow.
You could argue that the needs of the end user are prevailing over those of the enterprise. At Informed Consulting we believe that connecting people and the enterprise should be a win-win situation and is key to success.

With the new xCP solution we’re applying Component Based Authoring and Word is now only needed for the supporting documents. Not for the key information of the case. That key information is divided into logical components and authored independently. With this approach we created a balance between both user and enterprise needs. But in order to achieve this, more is needed than just solving the challenge of business process re-engineering. In fact, in this case the process is hardly changed.

Once you know what key information you need to capture, it’s time to let the UX (user experience) designer do her thing. My colleague Sandra did a tremendous job with the key users, designing screens for both capturing and displaying information. There has to be a natural order in the information that fits the way of working in the business. This means defining where on the screen a content component is positioned for a particular role (yes, different roles will typically lead to different positioning…), and which content components need just plain text formatting and which need rich text, to be able to add lists, mark text bold or even include hyperlinks, while on the other hand preventing the use of fonts other than what the corporate style guide dictates. It means defining where you need to restrict input to predefined taxonomies (or just simple drop-down boxes populated with values) and where you need supporting wizards. An example of the latter is one where the user provides answers and numbers, after which the system draws a conclusion that is used as input for the decision. To cut a long story short, information presented with a good user experience will help make the transition to component based authoring smooth.

Another key aspect is the transition from paper to digital, a topic on its own. In our project we opted for a gradual transition, because replacing meetings full of annotated documents, prepared offline over the weekend, with information accessed digitally through tablets and laptops is more than a business process change. As an intermediate step, the individually authored content components are aggregated into PDF/A documents. These documents are available for online reading as well as printing. It’s now up to the business themselves to execute the behavioural change process. In the meantime they can still print and scribble away where and whenever they want.

The third aspect I want to mention is archiving. Although it should be part of your business process re-engineering, it typically isn’t. Too often archiving is not seen as a business process. But even if it is, it’s a beast of its own. Still today it is common practice to archive ‘just’ documents. With component based authoring, you can no longer think in terms of archiving documents. Neither can you think in terms of archiving these content components on their own. They have relationships with other content components and together they have meaning. A content component that holds the annotation of an approval, only has meaning in its context. Archiving thus needs to evolve into Contextual Archiving whereby containers are archived and these containers include the appropriate content components as well as their relationships. Rethinking needs to be done around the purpose of the archival and the retention policies. How can you meet the archival goals for a case if key information in that case needs to be destroyed before the case itself gets destroyed? And what will regulators say when you include a content component into multiple containers which are managed independently and whereby not all (logical) instances of the content components are destroyed simultaneously? When you think about it, component based authoring reveals what has been hidden under the covers of a Word document for a long time: we didn’t manage the information but only the container that carried that information…

Times are changing in the ECM playing field. New ways of working, progressing technology, distributed collaboration and blurring boundaries pave the way into an interesting future. Next-Gen ECM / Next-Gen Information Management… Welcome into my world!

 

This post also appeared on LinkedIn.

Day 4 and 3 before it begins

May 3rd, 2015 Comments off

Yesterday I missed the opportunity to write my blog. Packing was on the menu for the evening. The past few days I could take those 30 minutes to write my blog, but now I had to make sure all items for our booth were packed and ready.

Two laptops, a hub, a lot of flyers and some nice give-aways. I thought I was ready, but then I remembered I needed to finish the last tests of our Office 365 demo with SPA4D and our SharePoint LSQM integration solution. The first is easy; I have given this demo probably 50 times now, all with great success. Our integration with LSQM is a different matter. We are just releasing this together with the Life Sciences team of EMC-ECD.

The business case is simple but perfect: within a pharma company, a large group of users needs to read the SOPs and other important documents, and this needs to be captured in the audit trail. This TBR (To Be Read) is a very basic function within life sciences. Normally the user needs a full LSQM license and needs to be trained in how to operate it. That is not easy, as they might use the system only 2-6 times a year. On the other hand, most of these users use SharePoint; they like it, understand it and are fine working with it. So the task is simple: create the TBR function with SPA4D, and the answer is perfect. A simple task for users to perform, and after the sign-off a record is added to the audit trail for this action.

But that was the easy business case. What is much more interesting is the ability to serve all partners in the life sciences ecosystem of a company. More and more pharma companies are just managing the process and outsourcing a lot of work to partner companies. These partner companies come in different sizes and shapes, and in very tight or very loose relationships. But for almost all of them it is mandatory that they be able to read, comment on and sometimes edit or create regulatory documents. This demand calls for a set of options a company can select from.

1) If the partner is fully trusted and you have a full working relationship, you might want that partner to have direct access to a subset of documents within Documentum. This needs to be a much simpler interface, preferably with a much lower cost base, as these users might change frequently. The interface should be simple and easy to use. The access needs to be possible within the extranet of the pharma company or via a cloud-based solution like Office 365.

2) If you work with the partner on a less frequent basis or in a less intensive manner, you might decide that the partner does not need full access to the site, but only read-only access to a part of it, and should be able to submit documents to be added to your quality system. Again, this should be a cost-effective interface and simple to use. Because of the more limited relationship it should suffice that only the high-level actions from the partner are captured in the audit trail, but versions and revisions should be fully available.

3) If it is a one-time or incidental partner, the partner should only get a copy of the relevant document(s), and there should be very controlled communication when documents are added to the system.

And all of this together makes: SPA4LSQM Partner eXchange (PX).

Within the easy-to-use SharePoint interface you can decide what level of trust you give to a certain partner and configure the level of access to your QM solution. Trusted partners get access to the full browse app part of SPA4D to manage the documents they are entitled to, while partners with less of a relationship get only read-only access to Documentum and can submit documents via a process within a normal SharePoint library, without having direct access to Documentum. If you want to make the integration even more loosely coupled, you could share the documents with the partner via OneDrive for Business and not even give the partner access to the SharePoint environment, but still control the documents.

All very powerful and very good to demo. So finally at 1.30 am all was tested and I was ready to go. Now I'm sitting in a Delta plane for the last hour before we touch down in Vegas after a long, long flight. Hope to see you, and let me impress you with a good demo of SPA4LSQM, or join us in the raffle for a very nice toy.

Day 8 in the countdown and UI is key for Case Management

April 27th, 2015 1 comment

and the story continues…

Today is King’s Day in The Netherlands. A good day to dress in orange and have some fun. One of the ‘fun’ things is that everybody is allowed to sell their junk: a garage sale, only with everybody together in one street on little carpets. In theory it is for the kids, but the parents control the cash. 🙂 Walking with my seven-year-old daughter and seeing her rush through the stuff to find the perfect thing, I could not help but drift back to my previous blog: user experience and case management.

Walking with hundreds of people in one little street and looking at hundreds of carpets full of stuff, how do I see what I need and what I really should steer away from? There are some basic rules. If it is dirty, stay away. If it is boy stuff, probably not interesting; if it is all black and army green, same thing. If it is pink, white or light blue, stop and have a look; if there are two girls aged 10-14 sitting on the carpet, same thing. So in less than an hour we were able to ‘do’ the street and my daughter was some good stuff richer.

And doing good case management is all about this. How can I, as a designer, set up a page for a case or a task in such a way that the person looking at it can easily make a judgement on the case within seconds? Working with our user designers at Informed Consulting, I notice they use the same concepts I just described to create the PERFECT page:

  • Simple and serene look and feel;
  • Try to identify blocks of data that have some sort of understandable relationship within the whole case/task;
  • Use colors and/or icons to show states and actions;
  • Distinguish between viewing and editing;
  • ‘Important’ stuff should be in the top center;

And the list is longer, but when a good UI expert is finished, it all sounds so natural, so logical. It is super, but sometimes also a bit frustrating, to see the reactions of the users. I spent hours and hours defining all the requirements SMART and well, and came up with the perfect solution and set of functions needed per role. But only when they have seen our mock-up do the users get excited: this is what I want, this is what we need! When do we get it?

Suddenly, that system that helps them do their tasks the way the company wants them to is actually fun to use, simple and easy. Things I did not hear a lot when developing a WebTop solution.

At our booth in the Momentum area we are showing our great products SPA4D and LoBConnect, but if you are interested in good xCP2 design or a good mock-up, please step up to our booth and I will show some great examples.

9 days to go to EMCW: why is case management so cool?

April 26th, 2015 1 comment

I was just trying to finish my last tasks before World. One important task is finishing a mock-up for a client: a mock-up for their new xCP 2.1 application. Doing this, I wonder more and more about this new concept, case management. Is this just another buzzword or does it really introduce a new paradigm in ECM?

The basic functional difference is the ability to create an object with relations, transactions (sorry, I have to say stateless processes, very good name……?) and structured datasets (contentless objects, another perfect name to explain to an end user….?). All nice technical functions that I really like, but do they make that much difference to the end user? Is this the ECM goose with the golden eggs or just some extra nice modern features?

Where it gets interesting is looking at the challenges of a case. Simply put, a case is a large amount of related information about a set of tasks that have to be performed or a goal that has to be reached. The challenge for every user within a case management solution is the overview. How do I see at one glance what the case is about and what I am supposed to do with it? With all those great technical options, relations, transactions, etc., it is so difficult to see the tree within the forest (a nice Dutch expression, but I think everybody will understand this one instantly).

So why is case management a game changer, and what is the basic necessity for a good case management system? In the end it is easy, maybe not for the average Documentum consultant, but the answer is: USER EXPERIENCE. Simple but so true. To be able to give an end user a workable system, it needs to have such a perfect user design that within seconds a caseworker knows what it is all about.

A normal Documentum consultant who was used to working with WebTop did not really know, or want to know, what user design was, for the simple reason that designing any good user experience in WebTop was a challenge, or should I say impossible.

But now we have xCP. This gives the designer a really flexible tool to fully design the interface and give the person the right display of information, so they can work very efficiently and like what they are doing. The interaction (or should I use another buzzword: the agile approach) you can have with the user group before you start to create a technical solution, by simply creating a mock-up, is baffling. Users cannot wait to get the system, workshops are actually fun to do and the results using tools like Axure are super. So far my thoughts about the new xCP; tomorrow some more detail about the options of a good design (as far as this simple Documentum consultant understands them).

Documentum Dump and Load limitations

March 11th, 2015 Comments off

Lately I’ve been involved in a project where we used Documentum’s dump/load feature to copy a lot of documents from one repository to another. We successfully copied millions of documents, folders and other objects, but this success did not come easily. In this blog I would like to share some of the issues we ran into, for the benefit of others using dump and load.

A standard tool

Dump and load is a tool that can be used to extract a set of objects from a Documentum repository into a dump file and load them into a different repository. Dump and load is part of the Documentum Content Server. This means it can be used with any Documentum repository in the world. The tool is documented in the Documentum Content Server Administration and Configuration Guide (find it here on the EMC Support site). The admin guide describes the basic operation of dump and load, but does not discuss its limitations. There is also a good Blue Fish article about dump and load that provides a bit more background.

A fragile tool

Dump and load only works under certain circumstances. Most importantly, the repository must be 100% consistent, or a dump will most likely fail. So my first tip: always run dm_clean, dm_consistencychecker and dm_stateofdocbase jobs before dumping and fix any inconsistencies found.
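Before kicking off a dump, it can also help to check in DQL whether those housekeeping jobs have actually run recently and completed without errors. A minimal sketch, assuming the standard dm_job attributes; the exact job object names (for example dm_DMClean versus dm_clean) can differ per Content Server version, so adjust them to match your repository:

    SELECT object_name, a_last_invocation, a_last_completion, a_current_status
    FROM dm_job
    WHERE object_name IN ('dm_DMClean', 'dm_ConsistencyChecker', 'dm_StateOfDocbase')

If a_last_completion is empty or the status reports errors, fix that first; as noted above, a dump of an inconsistent repository tends to fail somewhere along the way.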

Dump Limitations

The dump tool has limitations. Dump can be instructed to dump a set of objects using a DQL query. The dump tool will run the query and dump all selected objects. It will also dump all objects that the selected objects reference. That includes the objects' ACLs, folders, users, groups, formats, object types, etc. This is done in an effort to guarantee that the configuration in the target repository will be OK for the objects to land in. This feature causes a lot of trouble, especially when the target repository has already been configured with all the needed object types, formats, etc. It causes a 100-object dump to grow into a dump of thousands of objects, slowing down the dump and load process. Worse, the dump tool will dump any objects that are referenced from the original objects by object ID. This causes the folder structure for the selected documents to be included, as well as the content objects, but it can also cause other documents to be included, including everything that those documents reference (it is a recursive process). This method can backfire: if you select audit trail objects, for instance, all objects that they reference will be included in the dump as well.
Now this would not have been so bad if the dump tool did not have size limitations, but it does. We found, for instance, that it is impossible to dump a folder that has more than 20,000 objects in it (though your mileage may vary). The dump tool just fails at some point in the process. We discussed it with EMC Support and their response was that the tool has limitations that you need to live with.
As another example, we came across a repository where a certain group had many supergroups: this group was a member of more than 10,000 other groups. This was also too much for the dump tool. Since this group was given permissions in most ACLs, it became impossible to do any dumps in that repository. In the end we created a preparation script that removed this group from the other groups and a post-dump script to restore the group relations.
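For reference, a dump is started by creating a dm_dump_record object, as described in the admin guide. A minimal iapi sketch, with a hypothetical folder path and file name; the repeating type/predicate pairs are the part you tune to keep the object count down (double-check the attribute names against your Content Server version):

    create,c,dm_dump_record
    set,c,l,file_name
    /dump/project_x_batch01.dmp
    append,c,l,type
    dm_document
    append,c,l,predicate
    FOLDER('/Project X/Batch 01', DESCEND)
    save,c,l

The save triggers the actual dump. Because everything the selected documents reference comes along, keeping each predicate narrow (and the resulting object count well under the limits mentioned above) is what keeps the dump files small enough to load.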

Load Limitations

The load tool has its own limitations. Most importantly, we found that the bigger the dump file, the slower the load. This means that a dump file with 200,000 objects will not load in twice the time it takes to load 100,000 objects; it will take longer. We found that in our client's environment we really needed to keep the total object count of the dumps well below 1 million, or the load time would go from hours to days. We learned this the hard way when a load failed after 30 hours and we needed to revert it and retry.
Secondly, objects may be included in multiple dump files, for instance when there are inter-document relations. For objects like folders and types this is fine: the load tool will see that the object already exists and skip it. Unfortunately this works differently for documents. If a document is present in 3 dump files, the target folder will hold 3 identical documents after they have been loaded. Since you have no control over what is included in a dump file and you cannot load partial dump files, there is little you can do to prevent these duplications. We had to create de-duplication scripts to resolve this for our client. We also found that having duplicates can mean that the target docbase ends up with more documents than the source, and that the file storage location or database can run out of space. So for our production migration we temporarily increased the storage space to prevent problems.
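Our de-duplication scripts were client-specific, but the detection step can be as simple as a grouping query. A sketch, assuming duplicates end up with the same object_name under the same target cabinet (the folder path is hypothetical, and a real script should compare more than the name, such as content size or a checksum, before destroying anything):

    SELECT object_name, COUNT(*)
    FROM dm_document
    WHERE FOLDER('/Target Cabinet', DESCEND)
    GROUP BY object_name
    HAVING COUNT(*) > 1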
Another limitation concerns the restarting of loads. When a load stops halfway through, it can be restarted. However, we have not seen any load finish successfully after a restart in our project. Instead, it is better to revert a partial load and start over; reverting is much quicker than loading.
Finally, we found that after loading, some metadata of the objects in the target repository was not as expected. For instance, some fields containing object IDs still had IDs of the source repository in them, and some had NULL IDs where there should have been a value. Again we wrote scripts to deal with this.

As a final piece of advice, I would encourage you to run all the regular consistency and cleaning jobs after finishing the loading process. This includes dm_consistencychecker, dm_clean, dm_filescan, dm_logpurge, etc. This will clean up anything left behind by deleting duplicate documents and will ensure that the docbase is in a healthy state before it goes back into regular use.

As you may guess from this post, we had an exciting time in this project. There was a tight deadline and we had to work long hours, but we had a successful migration and I am proud of everyone involved.

If you want to know more, or want to share your own experience with dump and load, feel free to leave a comment or send me an email (info@informedconsulting.nl) or tweet (@SanderHendriks).