Archive for the ‘Tips and Tricks’ Category

Documentum Dump and Load limitations

March 11th, 2015 Comments off

Lately I’ve been involved in a project where we used Documentum’s dump/load feature to copy a lot of documents from one repository to another. We successfully copied millions of documents, folders and other objects, but this success did not come easily. In this blog post I would like to share some of the issues we ran into, for the benefit of others using dump and load.

A standard tool

Dump and load is a tool that can be used to extract a set of objects from a Documentum repository into a dump file and load them into a different repository. Dump and load is part of the Documentum Content Server. This means it can be used with any Documentum repository in the world. The tool is documented in the Documentum Content Server Administration and Configuration Guide (find it here on the EMC Support site). The admin guide describes the basic operation of dump and load, but does not discuss its limitations. There is also a good Blue Fish article about dump and load that provides a bit more background.

A fragile tool

Dump and load only works under certain circumstances. Most importantly, the repository must be 100% consistent, or a dump will most likely fail. So my first tip: always run the dm_clean, dm_consistencychecker and dm_stateofdocbase jobs before dumping, and fix any inconsistencies they find.
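
To check up front whether these jobs have run recently and completed cleanly, a quick DQL query against dm_job helps. This is only a minimal sketch, assuming the standard job object names (dm_DMClean, dm_ConsistencyChecker, dm_StateOfDocbase) and the usual dm_job attributes; verify the names and attributes in your own repository:

SELECT object_name, a_last_completion, a_current_status
FROM dm_job
WHERE object_name IN ('dm_DMClean', 'dm_ConsistencyChecker', 'dm_StateOfDocbase')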

Dump Limitations

The dump tool has limitations. Dump can be instructed to dump a set of objects using a DQL query. The dump tool will run the query and dump all selected objects. It will also dump all objects that the selected objects reference: the objects’ ACLs, folders, users, groups, formats, object types, etc. This is done in an effort to guarantee that the configuration in the target repository is complete enough for the objects to land. This feature causes a lot of trouble, especially when the target repository has already been configured with all the needed object types, formats, etc. It causes a 100-object dump to grow into a dump of thousands of objects, slowing down the dump and load process. Worse, the dump tool will dump any objects that are referenced from the original objects by object ID. This pulls in the folder structure for the selected documents as well as the content objects, but it can also pull in other documents, including everything that those documents reference (it is a recursive process). This method can backfire: if you select audit trail objects, for instance, all objects that they reference will be included in the dump.
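
For reference, this is roughly what defining such a dump looks like in an iapi session, following the pattern from the admin guide; the file path and folder are hypothetical. Note that even though the predicate only selects documents, everything those documents reference ends up in the dump file as well:

create,c,dm_dump_record
set,c,l,file_name
/data/dumps/migration_batch_01.dmp
append,c,l,type
dm_document
append,c,l,predicate
folder('/Projects/Migration Batch 01', descend)
save,c,l
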
Now this would not be so bad if the dump tool did not have size limitations, but it does. We found, for instance, that it is impossible to dump a folder that has more than 20,000 objects in it (though your mileage may vary): the dump tool just fails at some point in the process. We discussed it with EMC Support and their response was that the tool has limitations that you need to live with.
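
It therefore pays to check folder sizes before deciding how to split up your dumps. A simple DQL count (the path is hypothetical) tells you whether you are anywhere near that limit:

SELECT count(*) FROM dm_sysobject WHERE FOLDER('/Projects/Migration Batch 01', DESCEND)
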
As another example, we came across a repository where a certain group had many supergroups: it was a member of more than 10,000 other groups. This was also too much for the dump tool. Since this group was given permissions in most ACLs, it became impossible to do any dumps in that repository. In the end we created a preparation script that removed this group from the other groups and a post-dump script to restore the group memberships.
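
If you want to spot such groups before you start dumping, counting a group’s supergroups is a one-liner in DQL, since dm_group keeps its member groups in the repeating attribute groups_names; the group name below is hypothetical:

SELECT count(*) FROM dm_group WHERE ANY groups_names = 'problem_group'
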

Load Limitations

The load tool has its own limitations. Most importantly, we found that the bigger the dump file, the slower the load: a dump file with 200,000 objects will not load in twice the time it takes to load 100,000 objects, it will take longer than that. We found that in our client’s environment we really needed to keep the total object count of a dump well below 1 million, or the load time would go from hours to days. We learned this the hard way when a load failed after 30 hours and we needed to revert it and retry.
Secondly, objects may be included in multiple dump files, for instance when there are inter-document relations. For objects like folders and types this is fine: the load tool will see that the object already exists and skip it. Unfortunately this works differently for documents. If a document is present in 3 dump files, the target folder will hold 3 identical documents after they have been loaded. Since you have no control over what is included in a dump file and you cannot load partial dump files, there is little you can do to prevent these duplications, so we had to create de-duplication scripts to resolve this for our client. The duplicates also mean that the target docbase can temporarily hold more documents than the source, and that the file storage location or database can run out of space. For our production migration we therefore temporarily increased the storage space to prevent problems.
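
The de-duplication scripts themselves were specific to our client, but the detection side comes down to a query along these lines; a sketch that assumes duplicates end up with the same name in the same folder tree, and the cabinet path is hypothetical:

SELECT object_name, count(*)
FROM dm_document
WHERE FOLDER('/Target Cabinet', DESCEND)
GROUP BY object_name
HAVING count(*) > 1
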
Another limitation concerns restarting loads. When a load stops halfway through, it can be restarted. However, in our project we have not seen any load finish successfully after a restart. Instead it is better to revert a partial load and start it all over; reverting is much quicker than loading.
Finally, we found that after loading, some metadata of the objects in the target repository was not as expected. For instance, some fields containing object IDs still held IDs from the source repository, and some held NULL IDs where there should have been a value. Again we wrote scripts to deal with this.

As a final piece of advice I would encourage you to run all the regular consistency and cleaning jobs after finishing the loading process, including dm_consistencychecker, dm_clean, dm_filescan, dm_logpurge, etc. This will clean up anything left behind by the removal of duplicate documents and will ensure that the docbase is in a healthy state before it goes back into regular use.

As you may guess from this post, we had an exciting time in this project. There was a tight deadline and we had to work long hours, but we had a successful migration and I am proud of everyone involved.

If you want to know more, or want to share your own experience with dump and load, feel free to leave a comment or send me an email (info@informedconsulting.nl) or tweet (@SanderHendriks).

 

xCP2 NoSuchMethodError

February 18th, 2014 Comments off

Introduction

In xCP 2.0 most configuration and development happens within Documentum xCP Designer. However, in some cases you must use Composer to create extra features for your Documentum application. In my case I was working on a demo application for one of our customers. A colleague of mine had created a Documentum project in Composer to create a set of test documents for his Documentum 6.7 environment, and I wanted to use these for my Documentum 7.0 environment.

This led to several error messages when running the Composer project. The most important one was “java.lang.NoSuchMethodError: com.emc.xcp.runtime.engine.handler”. To solve this error I first searched the internet for any insights into the problem. Sadly I was not able to find a solution. I did find several blogs which mentioned that it is a known issue in the Documentum 7.0 environment and that it would be solved in the next release of xCP2, version 2.1. I also found a blog which mentioned that the DFC versions used by Documentum 7.0 and Composer 7.1 are different, which could lead to these problems. From there I tried to create a solution myself.
Read more…

Responsive HTML5 / CSS3.0 / LESS SP2010 Template Informed Consulting

April 22nd, 2013 Comments off

At Informed Consulting we use one template which contains our styling for multiple SharePoint publishing sites. The SharePoint 2007 (SP2007) template was updated a lot over the last couple of years, so it was nice to create a completely new, fresh template for SharePoint 2010 (SP2010).

Our new template for publishing websites in SP2010 contains preset styles for better browser compatibility, supports HTML5 and responsive themes, uses CSS3.0 without making it a mess, has a clear and open structure and is easy to adjust in future updates.

By building the template in the dynamic stylesheet language LESS, we can manage the template a lot more easily and clearly, using parameters. We used various combinations of multiple free-to-use web frameworks in LESS, controlled in two chapters, Template styling and Theme styling, which are described below.

The SP2010 template contains the following chapters:

  1. Dynamic Operations
  2. Reset Style sheets
  3. Optional template Functions
  4. Grid system (Semantic.gs)
  5. Frontend Framework collection Bootstrap
  6. Typographic Framework Baseline
  7. Template styling
  8. Theme styling
  9. Updates & Theme Media Queries

1. Dynamic Operations

In the lowest part of our LESS file we perform operations on numbers, colours and variables, so we can use the output all over the stylesheet, for example @default_TextColor, @default_Font and @var-default_LinkHoverColor.

2. Reset Stylesheet

We use multiple reset style sheets to make the websites browser compatible. A normal reset style for HTML 4.1 and CSS 2.1 was not enough, so the reset was improved with some extra reset styles: one especially for the HTML5 elements (the html5doctor.com reset styles, for IE9 and all older browsers), and a reset stylesheet for resetting the font sizes and colors of SP2010.

3. Optional Template functions

CSS3.0 does not improve the clear and open structure of the template; the sheer length of the code makes it hard not to make a mess. So I created a chapter filled with all the large CSS3.0 styling and turned it into functions, so they can easily be used in other parts of the template. Some of these CSS3.0 elements are based on template parameters located in the chapter Template styling.

4. Grid system (Semantic.gs)

Since I don’t want to create multiple columns for every new theme, I use the calculation from the semantic grid system for the template. It calculates the width and behaviour of the high-level containers and columns, which are easy to adjust by number via the template parameters in the chapter Template styling.

5. Frontend Framework collection Bootstrap v2.2.2

The Frontend Framework collection of Bootstrap is used for the multiple components in the content area of SP2010. Sliders, buttons, tabs, dropdowns, tooltips, forms, icons and even web parts instantly get a fresh new look when the right classes are used.

6. Typographic Framework Baseline

Every theme has its own typography and I needed a good base to work with, easily adjustable in LESS like the rest. I found a good typographic framework called Baseline that calculates the rules of the typography for us. The parameters for this calculation are located in the chapter Template styling.

7. Template styling

This is the most important chapter; it is where the basic website is created. First, the different solutions from the previous chapters are brought together by defining the template parameters: the operations, the grid system, and the frontend and typographic frameworks. Second come the behaviours of the SharePoint core.css basic styling in combination with our template styling, and third the basic website itself: the enumeration of styles for all the possible elements of the SP2010 publishing site, along with the styling of the basic web parts used in the structure.

8. Theme styling

The specific styling for the client theme is placed in this chapter. It starts with an enumeration of the template parameters that are being overruled. Then the styling is created, from the high-level elements down to the detailed content, with the help of the optional template CSS3.0 functions, Bootstrap’s built-in styling and sometimes the SharePoint 2010 chart.

9. Updates & Theme Media Queries

The first part of the dynamic stylesheet can contain the very specific styling for the theme, the updates and the media queries, which allow the website to adapt to the different window resolutions of multiple devices. The last part will not be used often, since SP2010 does not support Device Channels, which would allow the creation of separate master pages per device.

Conclusion

By building the template in the dynamic stylesheet language LESS I wanted to bridge the worlds of design and development. Although LESS is not yet fully utilized in the template, the LESS parts are the solution I was looking for: controlling multiple elements and behaviours of different websites through parameters in one spot. Take the colours used on a website: I only have to define one colour and the template will automatically calculate two good-looking colours next to it (if I want it to) and use these for elements in the website, like the navigation, header, footer and typographic blocks. Especially for our demo websites, this is a quick solution.

Sandra Filius

05-Apr-13

Your new interface in …? D2? xCP? Both?

November 7th, 2012 Comments off

Going through the Retention Policy Services class at Momentum 2012 in Vienna, I could not keep from thinking of a new interface. Why? I’ve seen so much of D2 and xCP during the past days that the new user interfaces and the new way of solution building have become the norm for me. Although brand new, this is what customers have expected for a long time.

Going through the class I realized that this is not an easy thing. It’s all integrated into Webtop. Not being a records manager, I may be wrong, but it seems as if there is a mismatch between how the tool is designed and the way records management is organized. It seems tool driven rather than process driven. Just for the sake of this blog, let’s assume that my feeling is spot on.

The question is: how would one recreate this? Using D2? Using xCP? Using both?

The easy answer is: it depends. It depends on goals, objectives, budget, time, resources… Foremost it depends on the business requirements and use cases.

Recreating the RPS interface should be driven by the requirements that tell us what the user needs in order to do his work. One of the constraints, however, will be that recreating a user interface should not lead to large changes to the back-end. Only then will we be successful, given the time and money companies have spent to implement a records management solution and, in some cases, to have it validated.

If we don’t do that, we will end up with a clone of the current interface in D2. Possible, but in my opinion a missed opportunity.

Strangely (is it?) enough I believe the answer should be 2 separate solutions. One for the average user and one for the records manager.

The one for the average user is needed because he works with documents and needs to apply a policy every now and then, or promote a document to a record. Yet I hope most of this is done automatically. In those cases where human intervention is needed, the functionality will be available through the D2 solutions that exist for that average user. Not an RPS solution.

The other one is needed for the records manager. Records Management is a structured process with unstructured data. Such processes are to be implemented through xCP.

The question addressed above is a typical question for all current user interfaces and solutions that rely on Webtop. What will be the replacement? Something in D2? Something in xCP? Something in both? There is no single answer. Each case must be evaluated on its own. And unlike the above, factors like time, resources and money may well influence that choice. I strongly advise, however, to make the choice first without looking at these 3 spoilers. Make sure that cutting corners for the sake of time, resources or money is a conscious decision.

D2, a hammer, but is everything a nail?

November 7th, 2012 Comments off

As Informed Consulting we believe in the individual employee that needs to be connected to the enterprise.
The individual has become important and will increasingly become more important.
Today’s employees are a mixture of people that grew up without PCs and people for whom being always online is like breathing: you can’t do without it.
Our lives have changed. Our expectations of organisations have changed. Bring your own device. Choose your own tool.

From the needs of the enterprise this looks completely different.
Control. Compliance. Structure. Successful ECM solutions typically meet these two needs roughly halfway.

Meeting both needs halfway needs more than just the good old Documentum Content Server and Webtop. We see the combination of SharePoint and Documentum — connected through SDF — as a common solution.
But what if — right or wrong — the customer doesn’t want SharePoint in their IT landscape? Is D2 a product that could fill that gap? Can we SharePointize D2?

Yesterday was election day in the USA so the applicable answer is: Yes we can!
But like these elections, it’s a close call.

More importantly, it depends on the context of your collaboration.
If it is just document handling and providing ‘info’ widgets next to it, there is a significant overlap between SharePoint and D2. Later, when D2 adds full 2-way communication in the widgets, the overlap will become even bigger.
If your collaboration also revolves around discussions, contact lists, meeting agendas, and all those (sort of) content-less objects, it becomes a different story.

The question then becomes: will you create all those missing features somehow in a Documentum back-end? I think — although technically possible — you shouldn’t. Once you have a hammer, not everything is a nail.

To avoid this pitfall, you must think carefully before you act. Ideally, even before you choose the solution!
D2 to some extent reminds me of the late 80s, with interfaces on top of databases.
We’ve come a long way since and learned some lessons.
One is to do your application analysis very well. Get all the requirements. Make your use cases. Do your interaction designs. Then choose your solution.

D2 Application Building

November 6th, 2012 Comments off

At Momentum 2012 in Vienna there are 3 numbers that draw the attention: 2, 4 and 7.
Although the target is still the New Normal (Peter Hinsen), the user that mixes work and private life in a 24/7 (see the numbers…) economy, the numbers refer to the 3 major products of EMC: xCP 2.0, D2 4.0 and Captiva 7.

Side note: for those that linked 7 to the new Documentum stack, which has reached version 7.0, I must admit that it is tempting to do so. However, that stack sits underneath xCP and D2 and I believe it is only a matter of time before it becomes irrelevant for the normal user.

Of the 3, D2 is the one that is of particular interest. With version 4.0 now available (4.1 is due early next year), the demos, the tutorial and the hands-on lab all show one thing: this is the foundation for all user interfaces to come. It will be very simple and tempting to configure a Webtop clone using D2 or, in the future, to replace e.g. TaskSpace with the paradigm of D2.

The question is: should you configure that Webtop clone in D2 or not?

I believe you should not. D2 is the tool-set that allows you to configure whatever (within limitations) interface you need. Or better put: the interface the new user needs. The business user.

All of a sudden we’re no longer tweaking an interface (Webtop) to meet the business user halfway. No, we’re creating a specific interface for a group of business users to do their work. Doing so also means that it can no longer be the average Documentum consultant (you know, that technical guy or girl that eats and drinks DFS, content types and ACLs); you need to bring a different consultant to the table.

You will need a skilled user experience consultant to sit with the business user and have her work towards an interaction design for the solution. Only then will you be able to deliver the solutions that the business needs. At Informed Consulting we’re glad to have that expertise already at hand. It’s more common in the SharePoint world, and as a C3P partner, EMC’s go-to partner for SDF (the SharePoint Documentum Framework), we’ve seen the challenge of creating a bridge between the business and IT tools. We’ve seen the risk of creating a language barrier by putting the Documentum guru next to the business guru. We’ve seen that creating a design document is not enough. Most business users find it hard to visualize from words. It goes without saying that a picture paints a thousand words.

It’s here where the user experience consultant steps in. Not only for retrieving better requirements, but also for creating mock-ups, screen layouts and other visuals to validate that the needs of the business are fully understood before we start configuring the application.

So, to tame the beast of D2, take care of your application analysis first.

Leveraging Azure Marketplace Data in SharePoint Part 1: Consuming Azure Marketplace Data in BCS

February 2nd, 2012 Comments off

In this series of posts:

  • Part 1: Consuming Azure Marketplace Data in BCS (this article).
  • Part 2: Using the Secure Store Service for Azure Marketplace Authentication in BCS.

Windows Azure Marketplace is a service by Microsoft that hosts WCF data services in Windows Azure. Organizations and individuals can consume that data via a subscription model. These services expose their data as REST services, which can be leveraged in SharePoint using BCS.

For this example we are going to use the US Crime Data Statistics service (DATA.gov). By using BCS we can consume the Azure WCF service and display this data in SharePoint via an External List.

For achieving the above we are going to take the following steps:

  • Create an Azure Marketplace account and consume the data.
  • Create a Custom .Net Connector to leverage this data in BCS.
  • Use the Secure Store Service for Azure Marketplace authentication (part 2).

In the first part of this series we are going to register for an Azure Marketplace account so we can subscribe to a service. After this, we are going to create a BCS Custom .Net Connector for adding that data to SharePoint’s BCS. In the next part of this series we are going to use Secure Store Service for Azure Marketplace Authentication.

Azure Marketplace Data

To get started, navigate to https://datamarket.azure.com/ and register for an account using your Windows Live ID. Click the Windows Live Sign In link in the upper right corner, add your information, accept the license agreement and click Register. Get a developer account, search for the US Crime Data Statistics service and add it to your account (some data sets cost money, so be aware). After you have found the data feed, click on it for more details, then click the Sign Up button on the right. After this the data feed will be added to your account. Click the “My Account” button in the top navigation and click “My Data” in the left navigation. You will see the newly added subscription on the page. Click the title of the service, which takes you to the details page of the Crime service. Click “Explore the dataset”. A new window opens in which you can filter the service data using the web browser. Add “Washington” to the “City” textbox and click “Run Query”.

  • Click the “Develop” button next to the Build Query window. The URL that is shown contains the address of the service together with the filter we added earlier in the query box. You can use the whole URL if you like, but you can also use the root service URL and filter the data using LINQ in the custom .Net connector. At the top of the screen, locate the Service Root URL and copy it.

Create a Custom .Net Connector for Connecting to the Azure Service

After registering for an Azure Marketplace account we are going to create a custom .Net connector to connect the data feed with SharePoint. The built-in WCF connector is not suitable for this scenario because the Marketplace feed expects the developer key when consuming the service, so in this case a custom connector needs to be developed using Visual Studio.

For this example we are going to create a .Net Assembly Connector. This type of connector is used when the External System Schema is fixed, like the data schema of the Crime data feed.

  • Open Visual Studio and create a new project.
  • Choose the “Business Data Connectivity Model” as the project type. Call it “USCrimeDataConnector” (or call it anything you like) and click “OK”.

  • Choose the SharePoint Server URL on which you’re going to debug and click Finish.
  • Rename the default BDCModel and call it “CrimeDataModel”.
  • We start by creating an External List for the Azure Crime Data. Right click the existing Entity1 and select Delete.
  • Select Entity1.cs and EntityService1.cs in the Solution Explorer and delete them.
  • Right click the canvas and select Add -> Entity. Right click the new Entity and select Properties. In the properties window set the Name to CrimeData.
  • Right click the CrimeData entity and select Add -> Identifier.
  • Select the Identifier and set the Name to Id using the Properties Window.
  • Add a ReadList method to the CrimeData Entity. Right click the CrimeData Entity and select Add -> Method. Rename the method to ReadList. In the BDC Method Details pane locate the ReadList Method and expand its parameters. Click the dropdown in <Add a Parameter> and choose Create Parameter. Set the following properties in the properties window:
    • Name to ReturnParameter
    • ParameterDirection to Return.


     

  • In the BDC Method Details pane locate the Instances node, select <Add a Method Instance> and choose Create Finder Instance. Set the following properties in the Properties Window:
    • Name to ReadList
    • Default to True
    • DefaultDisplayName to Read List
    • Return Parameter name to returnParameter.

     

  • Open the BDC Explorer Window, expand the ReadList message and select the returnParameterTypeDescriptor. Set the following properties in the Properties Window:
    • Name = CrimeDataList
    • TypeName = System.Collections.Generic.IEnumerable`1[[USCrimeDataConnector.CrimeDataModel.CrimeData, CrimeDataModel]]
    • IsCollection = True.
  • In the BDC Explorer, right click CrimeDataList and select Add Type Descriptor. Set the following properties in the Properties Window:
    • Name = CrimeData
    • TypeName = USCrimeDataConnector.CrimeDataModel.CrimeData, CrimeDataModel.
  • In the BDC Explorer, right click CrimeData and select Add Type Descriptor. Set the following properties in the Properties Window:
    • Name = Id
    • TypeName = System.Int32
    • Identifier = Id.
  • Add 3 more type descriptors and set the following properties (same as above):
    • Name = City
    • TypeName = System.String
    • Name = State
    • TypeName = System.String
    • Name = Year
    • TypeName = System.Int32
  • The next step is to define the ReadItem method. Right click the CrimeData Entity in the canvas and select Add -> Method. Rename the method to ReadItem.
  • Switch to the BDC Method Details Pane and select the ReadItem node. Click the dropdown in <Add a Parameter> and choose Create Parameter. Set the following properties in the properties window:
    • Name = ReturnParameter
    • ParameterDirection = Return.
  • Add another parameter and set the following properties:
    • Name = Id
    • ParameterDirection = In.
  • In the ReadItem method’s Instances node, select <Add a Method Instance> and choose Create Finder Instance. Set the following properties:
    • Name = ReadItem.
    • Type = Specific Finder
    • Default = True
    • DefaultDisplayName = ReadItem
    • Return Parameter = ReturnParameter

  • In the BDC Explorer Window locate the ReadItem parameters and expand them both.
  • Select idTypeDescriptor under the ReadItem’s id parameter and set the following values in the Properties window:
    • Name = CrimeDataId.
    • TypeName = System.Int32.
    • Identifier = Id.
  • Right Click CrimeData under ReadList -> ReturnParameter -> CrimeDataList -> CrimeData and select Copy.
  • Right Click ReturnParameter under ReadItem and select Paste.

  • Click Yes.
  • Locate the Model and rename it from BDCModel1 to CrimeDataModel. Repeat this for the LobSystem and the LobSystemInstance.
  • The BDC Explorer window will now look like the following figure:

  • The BDC model is ready. The next step is adding the Azure Marketplace service reference. Switch to the Solution Explorer and add a Service Reference.
  • Add the Azure Marketplace URL to the Address box and call the service reference CrimeDataServiceReference. Click OK.

  • Switch back to the Solution Explorer and add a new class to the project. Call it CrimeData.
  • Add the following code to the CrimeData class:

public class CrimeData {
    public int Id { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public int Year { get; set; }
}

  • Add a new class to the project and call it CrimeDataService. Add the following code to the CrimeDataService class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.SharePoint.Administration;

public partial class CrimeDataService {

    private string _url = "https://api.datamarket.azure.com/data.gov/Crimes/";
    private string _liveID = "{Your LiveID}";
    private string _accountID = "{Your AccountKey}";

    private static CrimeDataServiceReference.datagovCrimesContainer _context;

    public CrimeDataService() {
        // Authenticate against the Azure Marketplace feed with the Live ID and account key.
        _context = new CrimeDataServiceReference.datagovCrimesContainer(new Uri(_url)) {
            Credentials = new NetworkCredential(_liveID, _accountID)
        };

        // Without this callback the call may fail with "The underlying connection was closed:
        // Could not establish trust relationship for the SSL/TLS secure channel", caused by
        // System.Security.Authentication.AuthenticationException: The remote certificate is
        // invalid according to the validation procedure. Accept all certificates here.
        ServicePointManager.ServerCertificateValidationCallback += (sender1, certificate, chain, sslPolicyErrors) => true;
    }

    public static IEnumerable<CrimeData> ReadList() {
        try {
            // Query the CityCrime feed and map the results to the BCS entity class.
            var crimeData = (from c in _context.CityCrime
                             where c.City == "Washington"
                             select new CrimeData {
                                 Id = c.ROWID,
                                 City = c.City,
                                 State = c.State,
                                 Year = c.Year
                             }).ToList();
            return crimeData;
        } catch (Exception ex) {
            SPDiagnosticsService.Local.WriteTrace(0, new SPDiagnosticsCategory("Azure BCS connector: failed to fetch read list", TraceSeverity.Unexpected, EventSeverity.Error), TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
        }
        return null;
    }

    public static CrimeData ReadItem(int Id) {
        try {
            // Fetch a single row by its ROWID identifier.
            var item = _context.CityCrime.Where(x => x.ROWID == Id).ToList().First();
            var crimeData = new CrimeData {
                Id = item.ROWID,
                City = item.City,
                State = item.State,
                Year = item.Year
            };
            return crimeData;
        } catch (Exception ex) {
            SPDiagnosticsService.Local.WriteTrace(0, new SPDiagnosticsCategory("Azure BCS connector: failed to fetch read item", TraceSeverity.Unexpected, EventSeverity.Error), TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
        }
        return null;
    }
}

  • Press F5 to deploy the solution.
  • After deploying the external Content Type we first need to set the permissions in the BDC Service Application. Browse to Central Administration. Go to Application Management -> Service Applications and click the BDC Service application. Select the CrimeData ECT and click Set Object Permissions.
  • Add yourself and assign all the permissions.

  • Next is creating an external list for the CrimeData ECT. Creating an external list can be done by using SharePoint Designer or the browser. We will use the browser for this sample.
  • Browse to the SharePoint site, click on Site Actions -> View All Site Content -> Create.
  • Choose External List and Click Create.

  • Name the list CrimeData, click Select External Content Type and choose the CrimeData external content type from the dialog. Click the Create button.

  • After creating the External list verify that the Azure Marketplace CrimeData is visible in the page.

  • Click on one of the list items to see the details.

The source code for this post can be downloaded here.

DCTM Tip: job polling

January 2nd, 2012 2 comments

Something has been annoying me and I finally took some time to look it up. I thought I’d spare you all the hassle and share what I found here.

The problem at hand: whenever you start a DCTM job in a repository at a client, it takes 5 to 10 minutes before the job actually starts. That is annoying, especially when you are testing a custom job. I looked around in the usual places, but found nothing. Then I found the following in the DCTM Server Admin guide:

Setting the polling interval
The agent exec process runs continuously, polling the repository at specified intervals for jobs to execute. To change the polling interval, add the -override_sleep_duration argument with the desired value to the agent_exec_method command line. Use Documentum Administrator to add the argument to the command line.
For example:
.\dm_agent_exec -override_sleep_duration 120
The polling interval value is expressed in seconds (120 is 2 minutes expressed as seconds). The minimum value is 1 second.

Bonus: you can also set the number of jobs that will be run in one polling interval:

Setting the number of jobs in a polling cycle
By default, the agent exec executes up to three jobs in a polling cycle. To change the maximum number of jobs that can run in a polling cycle, add the -max_concurrent_jobs argument with the desired value to the agent_exec_method command line.
For example:
.\dm_agent_exec -max_concurrent_jobs 5
Use Documentum Administrator to modify the command line.
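
If you prefer not to click through Documentum Administrator, the same change can presumably be made in DQL by updating the method_verb of the agent_exec_method method object. Treat this as a sketch: check the existing command line first, and note that the agent exec may need a restart before it picks up the new value.

SELECT method_verb FROM dm_method WHERE object_name = 'agent_exec_method'

UPDATE dm_method OBJECT
SET method_verb = '.\dm_agent_exec -override_sleep_duration 120'
WHERE object_name = 'agent_exec_method'
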
Sander Hendricks

ECM Consultant

Path problems with the rich text editor

December 19th, 2011 Comments off

The rich text editor within SharePoint will often cause some problems. Apart from the fact that the created HTML code isn’t always that neat, it is also nearly impossible to use relative paths in your links or images, since the editor will always include the domain you’re currently on in your link. This is really annoying, especially when you have a separate domain for editing your content.

However, a workaround is possible which solves this problem. With the help of JavaScript it is possible to change the value of all links on your page. By making sure this piece of JavaScript is included on every page, you will never have the problem of accidentally creating dead links anymore.

jQuery is a very popular JavaScript framework which is already installed on a lot of sites. Most of the time it makes JavaScript scripting a lot easier and less time consuming. You can use the following piece of script to change all links and images so that the right domain name is set.


if (document.domain == 'www.publicurl.com')
{
	$("a[href^='https://www.contenturl.com']").each(function()
	{
		this.href = this.href.replace(/^https:\/\/www\.contenturl\.com/, "http://www.publicurl.com");
	});
	$("img[src^='https://www.contenturl.com']").each(function()
	{
		this.src = this.src.replace(/^https:\/\/www\.contenturl\.com/, "http://www.publicurl.com");
	});
}

In the first line:
if (document.domain == 'www.publicurl.com')
a check is done to make sure we are on the public domain at the moment; you don’t want the links to be changed while you are editing the content.
Next, all links that contain the value “https://www.contenturl.com” are looked up. After that a function is executed which changes the href of each link from “https://www.contenturl.com” to “http://www.publicurl.com”.

Exactly the same is done for all images on the site, where the “src” of all the “img” tags is altered.

If jQuery is not already installed and is difficult to install, it is also possible to use ‘normal’ JavaScript for this. In that case it isn’t even that much more work to write.


if (document.domain == 'www.publicurl.com')
{
	for (i = 0; i < document.links.length; i++)
	{
		document.links[i].href = document.links[i].href.replace("https://www.contenturl.com", "http://www.publicurl.com");
	}
	for (i = 0; i < document.images.length; i++)
	{
		document.images[i].src = document.images[i].src.replace("https://www.contenturl.com", "http://www.publicurl.com");
	}
}

This does exactly the same as the script above: it finds all links and images that contain “https://www.contenturl.com” and replaces that with “http://www.publicurl.com”.

Finding DFC javadocs

October 26th, 2011 1 comment

Today I needed to do some programming against the Documentum API to add some custom logic to a workflow. Now, the object model for Documentum workflows is somewhat complicated: it has process objects, workflow objects, work items, packages and attachments, to name a few. So I thought it would be helpful to have a look at the DFC javadocs to get some info on the methods I needed to call.

Unfortunately I had no javadoc of a recent Documentum version at hand. So I went on a documentation search:

  • I searched Powerlink and easily found javadocs for DFC 4i
  • I searched EMC Developer Network, but again only old stuff and messages of people looking for the same documentation
  • In the end I turned to the good old Download Center and there it was, in the section Documentum Foundation Classes

TIP: for people with no or limited access to the Download Center: I later found out that a copy of the DFC 6.6 javadocs is actually shipped with the xCP 1.5 Information Center. It’s in the plugins directory, in a file called com.emc.documentum.foundation.classes.javadoc_6.6.0.201009171023.jar.

If you unjar this file, you will get the DFC javadoc documentation.