Day 6: What is the best interface?

April 29th, 2015 1 comment

Today I had a great discussion I’d like to share with you. It was with a colleague about the best interface for end users. The discussion actually started a couple of weeks ago, in a conversation we had with an external independent business consultant. I’ll try to be gentle, but people who know me know that I have my doubts about business consultants without any expertise in Documentum or SharePoint who, for way too much money, give clients the most stupid advice.

But in this case I got a call from a very worried client: how do I make my users like the Documentum interface? They are now working with SAP and suddenly they will need to use Documentum. I of course started with the simple statement that everybody who is using SAP will cry tears of joy when they see a good user interface like xCP or D2, but I restrained myself and focused on the underlying question.

Which users/roles are we talking about? What are they doing, and why do they need Documentum or SAP? Those turned out to be good questions; the client did not have the answers and needed to go back to the extremely intelligent business consultant, who did not understand the questions because “we had to make a decision on what the preferred interface was”…

And here is what this blog is all about. Never, ever try to come up with a preferred interface for the whole company. A preferred interface for a role within the company, however, I do like. Look at what people in that role are doing, decide which interface fits them best, and make sure that all other information can be accessed via that system. The client loved this approach, and the business consultant was too busy advising another client and was never seen again.

But making this vision (the one perfect interface per role) happen is much easier said than done. How do I make my Documentum data accessible in SAP, PeopleSoft, SharePoint or any other business application without spending thousands of euros per year on the integration of my systems? Most new systems have some sort of web service option to share information, but often the data needs to be accessed from the client side, and with every update of one of the systems the integration has to be redone or repurchased.

It gets even more difficult if a user needs to be able to make decisions while data is being collected from the other system. Suddenly the integration is not one process but multiple separate processes that have to be started after a choice or intervention by the user. Not an easy task, and for sure very expensive.

We at Informed Consulting got that question from our clients so often that we finally decided building a product might be a good idea :-). Now there is LoBConnect. It is not a replacement for your Enterprise Service Bus, nor should it replace your SAP archiving solutions. But if you have identified that the best interface for a role is not Documentum (it can be anything: SAP, Oracle, PeopleSoft, Unit4, etc.) and that role needs to interact with Documentum frequently or occasionally, have a look at LoBConnect. It might, in a very simple way, solve your basic integration demands from an end-user perspective.

Any action or interface is easy to configure, no programming is required, and it understands most Documentum interfaces.

So don’t let the fact that users need to be able to access information from multiple systems limit your vision on the best interface for that specific role. Drop by our booth next week at EMC World and we’ll show you this simple wonder.

7 on the countdown: The ultimate SpringBoard

April 28th, 2015 Comments off

Today is Tuesday and within a week EMC World will be in full swing. Today’s blog is all about the new EMC concept for partners: SpringBoard. EMC acts as the middleman, connecting different partners to deliver the perfect solution for their customers. EMC understands that this is the best way forward if they want to increase the license sales of their products. Do not compete with partner products but embrace them and form a great joint venture to the benefit of all.

We at Informed Consulting are all about this concept. Even before SpringBoard we had very good relations with a large group of partners who are often also competitors. In the end we think that when we work together there will be a lot more for everyone than if we try to compete hard. It might not be the only way to get the biggest piece of the pie, but for sure the pie will be a lot (LOT) bigger.

Today we released a perfect example of the SpringBoard concept. With SPA4D we created the perfect collaboration interface for Documentum. With SharePoint as THE interface of choice for a large part of the organization, it is only logical to have an easy-to-use connection to the information stored in Documentum and to be able to leverage the good collaboration functions of SharePoint, like simultaneous editing, OneDrive and sharing, on your Documentum information.

But having only SharePoint functions available does leave your Documentum solution limited. We already added things like renditions, full versioning (including metadata), workflow, etc., but where we got stuck was the whole concept of annotations. How do you support a user with RELATE permissions so they can leverage that unique Documentum concept and create standard Documentum annotations?

The answer was simple and perfect. We already had a close working relationship with Aerow. They have their certified solution ARender as a replacement for PDF Annotation Services, which has reached end of life. This easy-to-use and configurable solution has a perfect web service handle to open a document and start the read/annotate process.

Within days we added this function to our SPA4D product, and without any programming on either side it is now part of the solution package, on premises and in the cloud. So now, even more than before, Informed Consulting and Aerow join forces to deliver the best Documentum solution to our customers.

So come see us at the booth at EMCWorld and we will show you this perfect SpringBoard example!


Day 8 in the countdown and UI is key for Case Management

April 27th, 2015 1 comment

and the story continues…

Today is King’s Day in The Netherlands. A good day to dress in orange and have some fun. One of the ‘fun’ things is that everybody is allowed to sell their junk: a garage sale, only with everybody together in one street on little carpets. In theory it is for the kids, but the parents control the cash. 🙂 Walking with my seven-year-old daughter and seeing her rush through the stuff to find the perfect thing, I could not help but drift back to my previous blog: user experience and case management.

Walking with hundreds of people in one little street and looking at hundreds of carpets full of stuff, how do I see what I need and what I really should steer away from? There are some basic rules. If it is dirty, stay away. If it is boy stuff, probably not interesting; if it is all black and army green, same thing. If it is pink, white or light blue, stop and have a look; if there are two girls aged 10 to 14 sitting on the carpet, same thing. So in less than an hour we were able to ‘do’ the street and my daughter was some good stuff richer.

And doing good case management is all about this. How can I, as a designer, set up a page for a case or a task in a way that lets the person looking at it make a judgement on the case within seconds? Working with our user experience designers at Informed Consulting, I notice they use the same concepts I just described to create the PERFECT page:

  • Simple and serene look and feel;
  • Try to identify blocks of data that have some sort of understandable relationship within the whole case/task;
  • Use colors and/or icons to show states and actions;
  • Distinguish between viewing and editing;
  • ‘Important’ stuff should be in the top center;

And the list is longer, but when a good UI expert is finished, it all sounds so natural, so logical. It is super, but sometimes also a bit frustrating, to see the reactions of the users. I spent hours and hours defining all the requirements properly and SMART, and came up with the perfect solution and set of functions needed per role. But only when they have seen our mock-up do the users get excited: this is what I want, this is what we need! When do we get it?

Suddenly, that system that helps them do their tasks the way the company wants them to is actually fun to use, simple and easy. Things I did not hear a lot when developing a WebTop solution.

At our booth in the Momentum area we are showing our great products SPA4D and LoBConnect, but if you are interested in good xCP2 design or a good mock-up, please step up to our booth and I will show you some great examples.

9 days to go to EMCW: why is case management so cool?

April 26th, 2015 1 comment

I was just trying to finish my last tasks before EMC World. One important task is finishing a mock-up for a client: a mock-up for their new xCP 2.1 application. Doing this, I wonder more and more about this new concept of case management. Is this just another buzzword, or does it really introduce a new paradigm in ECM?

The basic functional difference is the ability to create an object with relations, transactions (sorry, I have to say stateless processes; very good name…?) and structured datasets (contentless objects, another perfect name to explain to an end user…?). All nice technical functions that I really like, but does it make that much difference to the end user? Is this the ECM chicken with the golden eggs, or just some nice extra modern features?

Where it gets interesting is looking at the challenges of a case. Simply put, a case is a large amount of related information about a set of tasks that have to be performed or a goal that has to be reached. The challenge for every user within a case management solution is the overview. How do I see at a glance what the case is about and what I am supposed to do with it? With all those great technical options, relations, transactions, etc., it is so difficult to see the tree within the forest (a nice Dutch expression, but I think everybody will understand this one instantly).

So why is case management a game changer, and what is the basic necessity for a good case management system? In the end it is easy, maybe not for the average Documentum consultant, but the answer is: USER EXPERIENCE. Simple but so true. To be able to give an end user a system that is workable, it needs such a perfect user design that within seconds a caseworker knows what it is all about.

A normal Documentum consultant who was used to working with WebTop did not really know, or want to know, what user design was, for the simple reason that designing any good user experience in WebTop was a challenge, or should I say impossible.

But now we have xCP. This gives the designer a really flexible tool to fully design the interface and give the person the right display of information to work very efficiently and like what they are doing. The interaction (or should I use another buzzword, the agile approach) you can have with the user group before you start to create a technical solution, simply by creating a mock-up, is baffling. Users cannot wait to get the system, workshops are actually fun to do, and the results using tools like Axure are super. So far my thoughts about the new xCP; tomorrow some more detail about what makes a good design (as far as this simple Documentum consultant understands it).

EMC Elect is getting ready for EMCW-Momentum 2015

April 25th, 2015 Comments off

It is less than 10 days until I fly to Vegas again, ready for a week of knowledge sharing and showing the world the wonders of SPA4D and LoBConnect. It will be different from previous years, for a number of reasons:

1) We have two very cool products. We see how happily our customers are already reacting to them, and we are excited to see how the Momentum crowd will respond.

2) I cannot wait to finally get a glimpse of all the new things from ECD: What will the new Capital Projects UI look like? Is the 3rd-generation platform getting the traction it deserves? Is LSQM ready for the next step, and will the multi-tenant solution for Life Science be a good competitor for Veeva? What about xCP and a rules engine? D2 and xCP are clearly growing toward each other, but when will they become one: now, never, or anything in between?

3) Being EMC Elect. I don’t know if anything will be different, but I feel obligated to write a blog post per day to share my ideas and thoughts.

Please drop by our booth and share your thoughts. I’m always in for a good discussion.

What you tell me in Vegas, might end up in my blog 🙂

Documentum Dump and Load limitations

March 11th, 2015 Comments off

Lately I’ve been involved in a project where we used Documentum’s dump/load feature to copy a lot of documents from one repository to another. We successfully copied millions of documents, folders and other objects, but this success did not come easily. In this blog I would like to share some of the issues we ran into, for the benefit of others using dump and load.

A standard tool

Dump and load is a tool that can be used to extract a set of objects from a Documentum repository into a dump file and load them into a different repository. Dump and load is part of the Documentum Content Server, which means it can be used with any Documentum repository in the world. The tool is documented in the Documentum Content Server Administration and Configuration Guide (available on the EMC Support site). The admin guide describes the basic operation of dump and load, but does not discuss its limitations. There is also a good Blue Fish article about dump and load that provides a bit more background.

A fragile tool

Dump and load only works under certain circumstances. Most importantly, the repository must be 100% consistent, or a dump will most likely fail. So my first tip: always run the dm_clean, dm_consistencychecker and dm_stateofdocbase jobs before dumping, and fix any inconsistencies found.
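
A quick way to verify that those jobs have actually run, and how they ended, is to query their dm_job objects before you start dumping. Below is a minimal sketch, assuming the usual out-of-the-box job names and the standard dm_job attributes; adjust both to your repository and Content Server version, and treat DOCBASE, DM_USER and DM_PASS as placeholders.

#!/bin/sh
# Sketch: show the last completion time and status of the housekeeping jobs.
# DOCBASE, DM_USER and DM_PASS are placeholders for your own environment.
idql ${DOCBASE} -U${DM_USER} -P${DM_PASS} <<'EOF'
select object_name, a_last_completion, a_current_status
from dm_job
where object_name in ('dm_DMClean','dm_ConsistencyChecker','dm_StateOfDocbase')
order by object_name
go
EOF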

Dump Limitations

The dump tool has limitations. Dump can be instructed to dump a set of objects using a DQL query: the dump tool runs the query and dumps all selected objects. It will also dump all objects that the selected objects reference. That includes the objects’ ACLs, folders, users, groups, formats, object types, etc. This is done in an effort to guarantee that the configuration in the target repository will be OK for the objects to land in. This feature causes a lot of trouble, especially when the target repository has already been configured with all the needed object types, formats and so on. It causes a 100-object dump to grow into a dump of thousands of objects, slowing down the dump and load process. Worse, the dump tool will dump any objects that are referenced from the original objects by object ID. This causes the folder structure for the selected documents to be included, as well as the content objects, but it can also cause other documents to be included, including everything that those documents reference (it is a recursive process). This method can backfire: if you select audit trail objects, for instance, all objects that they reference will be included in the dump.
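
For context, this is roughly what starting a dump looks like: you create a dm_dump_record object with one or more type/predicate pairs, and the Content Server writes everything the predicates select, plus everything those objects reference, to the dump file. The sketch below is a simplified example; the repository, dump path and predicate are made up, and version-specific options (content inclusion, for example) are described in the admin guide.

#!/bin/sh
# Sketch: dump one folder tree by creating a dm_dump_record via iapi.
# DOCBASE, DM_USER, DM_PASS, the dump path and the predicate are examples only.
iapi ${DOCBASE} -U${DM_USER} -P${DM_PASS} <<'EOF'
create,c,dm_dump_record
set,c,l,file_name
/dctm/share/dumps/projectx.dmp
append,c,l,type
dm_document
append,c,l,predicate
folder('/Project X', descend)
save,c,l
getmessage,c
EOF
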
Now this would not have been so bad if the dump tool did not have size limitations, but it does. We found, for instance, that it is impossible to dump a folder that has more than 20,000 objects in it (though your mileage may vary). The dump tool just fails at some point in the process. We discussed it with EMC Support and their response was that the tool has limitations that you need to live with.

As another example, we came across a repository where a certain group had many supergroups: it was a member of more than 10,000 other groups. This was also too much for the dump tool. Since this group was given permissions in most ACLs, it became impossible to do any dumps in that repository. In the end we created a preparation script that removed this group from the other groups and a post-dump script to restore the group memberships.
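
The preparation script boiled down to finding every group the problem group was a member of, dropping it from each of them, and adding it back after the dump. Here is a simplified sketch of the ‘remove’ half, with hypothetical group names; in practice we generated the ALTER GROUP statements from the first query and reviewed them before running anything.

#!/bin/sh
# Sketch: list the supergroups of the problem group, then drop it from one of them.
# 'problem_group' and 'some_supergroup' are placeholders.
idql ${DOCBASE} -U${DM_USER} -P${DM_PASS} <<'EOF'
select group_name from dm_group where any groups_names = 'problem_group'
go
alter group some_supergroup drop problem_group
go
EOF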

Load Limitations

The load tool has its own limitations. Most importantly, we found that the bigger the dump file, the slower the load. This means that a dump file with 200,000 objects will not load in twice the time it takes to load 100,000 objects; it will take longer. We found that in our client’s environment we really needed to keep the total object count of a dump well below 1 million, or the load time would go from hours to days. We learned this the hard way when we had a load fail after 30 hours and we needed to revert it and retry.

Secondly, objects may be included in multiple dump files, for instance when there are inter-document relations. For objects like folders and types this is fine: the load tool will see that the object already exists and skip it. Unfortunately this works differently for documents. If a document is present in three dump files, the target folder will hold three identical documents after they have been loaded. Since you have no control over what is included in a dump file and you cannot load partial dump files, there is little you can do to prevent these duplications. We had to create de-duplication scripts to resolve this for our client. We also found that having duplicates can mean that the target docbase holds more documents than the source and that the file storage location or database can run out of space. So for our production migration we temporarily increased the storage space to prevent problems.
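
Our de-duplication scripts were specific to the client’s document model, but the starting point is always the same: find documents that now occur more than once in the same place. A hedged sketch of such a check, assuming duplicates share their object_name within the loaded folder tree (real de-duplication should of course compare more than just the name):

#!/bin/sh
# Sketch: list document names that occur more than once under the loaded folder tree.
# '/Project X' and TARGET_DOCBASE are placeholders.
idql ${TARGET_DOCBASE} -U${DM_USER} -P${DM_PASS} <<'EOF'
select object_name, count(*)
from dm_document
where folder('/Project X', descend)
group by object_name
having count(*) > 1
go
EOF
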
Another limitation concerns restarting loads. When a load stops halfway through, it can be restarted. However, we have not seen any load finish successfully after a restart in our project. Instead it is better to revert a partial load and start it all over. Reverting is much quicker than loading.

Finally, we found that after loading, some metadata of the objects in the target repository was not as expected. For instance, some fields containing object IDs still had IDs of the source repository in them, and some had NULL IDs where there should have been a value. Again, we wrote scripts to deal with this.
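
For completeness, this is how a load is typically started: you create a dm_load_record that points at the dump file and the Content Server takes it from there, which is also why there is so little to tune once a load is running. A minimal sketch follows; the path is made up, and relocation and other version-specific options are covered in the admin guide.

#!/bin/sh
# Sketch: load a previously created dump file by creating a dm_load_record via iapi.
# The dump file must be readable by the target Content Server; names are placeholders.
iapi ${TARGET_DOCBASE} -U${DM_USER} -P${DM_PASS} <<'EOF'
create,c,dm_load_record
set,c,l,file_name
/dctm/share/dumps/projectx.dmp
save,c,l
getmessage,c
EOF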

As a final piece of advice, I would encourage you to run all the regular consistency and cleaning jobs after finishing the loading process. This includes dm_consistencychecker, dm_clean, dm_filescan, dm_logpurge, etc. This will clean up anything left behind by deleting duplicate documents and will ensure that the docbase is in a healthy state before it goes back into regular use.

As you may guess from this post, we had an exciting time on this project. There was a tight deadline and we had to work long hours, but we had a successful migration and I am proud of everyone involved.

If you want to know more, or want to share your own experience with dump and load, feel free to leave a comment or send me an email (info@informedconsulting.nl) or tweet (@SanderHendriks).

 

Social Welfare at the ECM Workplace

January 19th, 2015 Comments off

A few months ago, Linda gave birth to her son Luca. Linda is the wife of Stephan, a colleague of mine. Curious as he is, Luca decided prematurely that it was time to see the light of day. That by itself wasn’t any problem at all; the world was ready for him.

The birth of Luca prompted me to share a story that I tell my customers in the early days of a document management project. By now you are wondering why the birth of Luca triggered this story.

Here in the Netherlands, we have a social welfare system in place that kicks in during the early days of a pregnancy. Not only is the health of both mother and child monitored, but the system also ensures a safe home is in place for the newborn. It may sound overanxious, but one of the checks they do is to see if you have a cradle for the baby. That same social welfare system functions as a lifeline throughout your entire life, until you come to your grave at a ripe old age.

That lifeline provides the guidance, the procedures, the policies and the actions to fall back on during your life. It’s the blueprint of the minimal life. You can still live your life to the max the way you want it, as long as you don’t underperform and drop below the minimum that the lifeline provides. It also takes into account the stages that you pass through in your life. You may become a parent yourself, which gives you access to child support. You may develop a partial disability to work, which provides access to special compensation benefits. And even a basic pension is provided when you reach the age of 65+.

For us humans, the Social Welfare system provides the lower limit blueprint of our life from Cradle to Grave.

If you’ve read my previous post (Diversity at the ECM Workplace) about Connecting People to the Enterprise, you will understand that bringing and keeping your users on board requires an ECM solution that is easy to use but still honours the enterprise needs. One aspect that you need to facilitate is what I call the Social Welfare for the ECM Workplace.

Cradle to Grave is the concept that implements core information management functions, which become a lifeline throughout the entire life of your documents.

If I create a new document, the system needs to be ready for that. It needs to support the cradle. This can be done if the lifeline supports me with, for example, content types, templates, managed metadata and rule-based storage. In these early days in the life of the document, it needs the lifeline to understand whether it is going to be a contract based on the English-language template. We stick more labels on the document to classify it, and together these allow a document management solution to decide where the cradle should be located.

That lifeline also provides the guidance, the procedures, the policies and the actions to fall back on during the life of the document. It will pass through stages depending on the life it lives. In the infant stages you’ll see typical states like draft and for review. In the adolescent stage the document will go up for approval and get approved. While the document matures, it can use the supporting processes to move between these states and stages. At some point in time it might become a reference document for others, which alters its access permissions as well as its information classification. Some documents will move from classified to unclassified, from internal use only to publicly available.

Like all of us, there comes a time when the document, too, will retire. It will be withdrawn from active service but will still be available in some archived format, once again with changed access permissions and information classification. It may also move to a new storage location.

For managed information, laws, rules and regulations determine the length of the pension. There is no fixed rule for this, just like nobody knows how many years one is given to enjoy old age. The harsh reality is that it won’t last forever. For managed information, the grave implies that the information is deleted from the ECM solution or moved out of the system to preserve its historical value elsewhere.

Depending on your requirements and circumstances, you determine what that lower limit is and which ‘social benefits’ you provide your users.
For managed information, Social Welfare for the ECM Workplace provides the lower limit blueprint of the life of that information from Cradle to Grave.

So, why did the birth of Luca trigger this? Because of the parallel between the Dutch social welfare system and the Cradle to Grave concept. You don’t want a fixed path for your newborn, and nor should it be a one-off approach for your documents, if you want to keep your users connected with your enterprise needs. But the opposite is also true: you don’t want uncontrolled chaos in either situation. It should be predictable, acknowledging that new documents get created and deleted and need to be managed in between. From Cradle to Grave.

Just as the concepts of Diversity and Cradle to Grave match perfectly in real life, they match perfectly in our ECM world. Take a look at SPA4D.com if you want to learn more about how we can help connect SharePoint collaboration functionality to the enterprise control of Documentum. Or watch our blog for more articles on enterprise information management.

Diversity at the ECM Workplace

November 10th, 2014 Comments off

Just the other day I was driving home from the office, reflecting on events that happened in the last few days and weeks. As always, driving home is one of those precious moments where I can sit back and reflect. Sitting in the car in traffic, it finally dawned on me.

For a couple of days I had been trying to put my finger on something that bothered me. I had been working on multiple engagements over the last few weeks. Some related only to EMC Documentum, some only to Microsoft SharePoint, and some included both. All were in different industries. If you didn’t know better, you would say there was nothing they had in common. But there was.
Read more…

Configuration vs Customization vs Development

June 8th, 2014 Comments off

Every day more customers are hesitant about development and question whether custom work against SharePoint is a good idea. Often they lean towards the opinion that only out-of-the-box solutions are allowed to be created. They believe this will pose fewer problems when upgrading to the next version, or they just have an overall no-code policy.
This raises the question of what exactly is out-of-the-box. There have been jokes that a SharePoint installation is only out-of-the-box as long as you only add content through the browser without changing anything else. There is a reason for those jokes. Out-of-the-box does not always mean there will be no problems during an upgrade, and code does not always mean there will be. That is why we have been thinking about what is customization, what is configuration and what is development, and why we have created a vision of which should be used when. Read more…


Changes to the ‘SuiteBarBrandingElementHtml’ property not reflected

June 3rd, 2014 Comments off

In SharePoint 2013 you can run into this issue: the top left corner of pages does not reflect changes to the ‘SuiteBarBrandingElementHtml’ Web Application property, or is empty instead of showing the default ‘SharePoint’ text.

In this article I’ll explain why this can be the case and how you can resolve or prevent it.
Read more…