
Posts Tagged ‘SharePoint 2010’

Leveraging Azure Marketplace Data in SharePoint Part 1: Consuming Azure Marketplace Data in BCS

February 2nd, 2012

In this series of posts:

  • Part 1: Consuming Azure Marketplace Data in BCS (this article).
  • Part 2: Using the Secure Store Service for Azure Marketplace Authentication in BCS.

Windows Azure Marketplace is a service by Microsoft that hosts WCF Data Services in Windows Azure. Organizations and individuals can consume that data via a subscription model. These services expose their data as REST (OData) feeds, which can be leveraged in SharePoint using Business Connectivity Services (BCS).

For this example we are going to use the US Crime Data Statistics service (DATA.gov). By using BCS we can consume this Azure WCF Data Service and display the data in SharePoint via an External List.

To achieve this, we are going to take the following steps:

  • Create an Azure Marketplace account and consume the data.
  • Create a Custom .Net Connector to leverage this data in BCS.
  • Use the Secure Store Service for Azure Marketplace authentication (part 2).

In this first part of the series we are going to register for an Azure Marketplace account so we can subscribe to a service. After that, we will create a BCS custom .Net connector to bring the data into SharePoint’s BCS. In the next part of the series we will use the Secure Store Service for Azure Marketplace authentication.

Azure Marketplace Data

To get started, navigate to https://datamarket.azure.com/ and register for an account using your Windows Live ID: click the Windows Live Sign In link in the upper right corner, enter your information, accept the license agreement and click Register. Once you have a developer account, search for the US Crime Data Statistics service and add it to your account (be aware that some data sets cost money).

When you have found the data feed, click on it for more details and then click the Sign Up button on the right. The feed is now added to your account. Click the “My Account” button in the top navigation and then click “My Data” in the left navigation; you will see the newly added subscription on the page. Click the title of the service, which takes you to the details page of the Crime service, and click “Explore the dataset”. A new window opens in which you can filter the service data using the web browser. Enter “Washington” in the “City” textbox and click “Run Query”.

  • Click on the “Develop” button next to the Build Query window. The URL shown there contains the address of the service together with the filter we added earlier in the query box. You can use that whole URL if you like, but you can also use the root service URL and filter the data using LINQ in the custom .Net connector, which is what we will do here. Locate the Service Root URL at the top of the screen and copy it. (A quick way to verify the feed outside SharePoint is sketched below.)
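
Before wiring the feed into BCS, it can be useful to verify the subscription and account key with a plain HTTP request. Below is a minimal sketch (the query path and the placeholder credentials are assumptions based on the URL shown in the Develop window); the Marketplace accepts the account key as the password of a Basic authentication credential:

using System;
using System.IO;
using System.Net;

class FeedCheck {
    static void Main() {
        // Hypothetical query URL: the Service Root URL plus the filter built in the Explore window.
        var url = "https://api.datamarket.azure.com/data.gov/Crimes/CityCrime?$filter=City%20eq%20'Washington'";
        var request = (HttpWebRequest)WebRequest.Create(url);
        // The user name is not validated; the account key acts as the password.
        request.Credentials = new NetworkCredential("{Your LiveID}", "{Your AccountKey}");
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream())) {
            Console.WriteLine(reader.ReadToEnd()); // raw Atom/OData XML
        }
    }
}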

Create a Custom .Net Connector for Connecting to the Azure Service

After registering for an Azure Marketplace account, we are going to create a custom .Net connector to connect the data feed to SharePoint. The built-in WCF connector is not suitable for this scenario, because the Marketplace feed expects the account key to be supplied when the service is consumed. So in this case a custom connector needs to be developed using Visual Studio.

For this example we are going to create a .Net Assembly Connector. This type of connector is used when the external system’s schema is fixed, as is the case for the Crime data feed. The finished connector boils down to a service class with two static finder methods, sketched below; the model, the entity class and the real implementations follow step by step.
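
A skeleton of that service class (the CrimeData entity class is defined later in this post; the NotImplementedException bodies are placeholders):

// Skeleton of the connector's service class; CrimeData is defined later in this post.
public partial class CrimeDataService {
    // Finder: returns all rows shown in the external list.
    public static System.Collections.Generic.IEnumerable<CrimeData> ReadList() {
        throw new System.NotImplementedException("Implemented later in this post.");
    }
    // Specific Finder: returns a single row by its identifier.
    public static CrimeData ReadItem(int id) {
        throw new System.NotImplementedException("Implemented later in this post.");
    }
}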

  • Open Visual Studio and create a new project.
  • Choose the “Business Data Connectivity Model” as the project type. Call it “USCrimeDataConnector” (or call it anything you like) and click “OK”.

  • Choose the SharePoint Server URL on which you’re going to debug and click Finish.
  • Rename the default BdcModel1 node in the Solution Explorer to “CrimeDataModel”.
  • We start by creating the External Content Type for the Azure crime data. Right click the existing Entity1 and select Delete.
  • Select Entity1.cs and EntityService1.cs in the Solution Explorer and delete them.
  • Right click the canvas and select Add -> Entity. Right click the new Entity and select Properties. In the properties window set the Name to CrimeData.
  • Right click the CrimeData entity and select Add -> Identifier.
  • Select the Identifier and set the Name to Id using the Properties Window.
  • Add a ReadList method to the CrimeData Entity. Right click the CrimeData Entity and select Add -> Method. Rename the method to ReadList. In the BDC Method Details pane locate the ReadList Method and expand its parameters. Click the dropdown in <Add a Parameter> and choose Create Parameter. Set the following properties in the properties window:
    • Name to ReturnParameter
    • ParameterDirection to Return.



  • In the BDC Method Details pane locate the Instances node, select <Add a Method Instance> and choose Create Finder Instance. Set the following properties in the Properties Window:
    • Name to ReadList
    • Default to True
    • Default Display Name to Read List
    • Return Parameter to ReturnParameter.


  • Open the BDC Explorer window, expand the ReadList method and select the returnParameterTypeDescriptor. Set the following properties in the Properties Window:
    • Name = CrimeDataList
    • TypeName = System.Collections.Generic.IEnumerable`1[[USCrimeDataConnector.CrimeDataModel.CrimeData, CrimeDataModel]]
    • IsCollection = True.
  • In the BDC Explorer, right click CrimeDataList and select Add Type Descriptor. Set the following properties in the Properties Window:
    • Name = CrimeData
    • TypeName = USCrimeDataConnector.CrimeDataModel.CrimeData, CrimeDataModel.
  • In the BDC Explorer, right click CrimeData and select Add Type Descriptor. Set the following properties in the Properties Window:
    • Name = Id
    • TypeName = System.Int32
    • Identifier = Id.
  • Add three more type descriptors under CrimeData in the same way, with the following properties:
    • Name = City
    • TypeName = System.String
    • Name = State
    • TypeName = System.String
    • Name = Year
    • TypeName = System.Int32
  • The next step is to define the ReadItem method. Right click the CrimeData entity in the canvas and select Add -> Method. Rename the method to ReadItem.
  • Switch to the BDC Method Details Pane and select the ReadItem node. Click the dropdown in <Add a Parameter> and choose Create Parameter. Set the following properties in the properties window:
    • Name = ReturnParameter
    • ParameterDirection = Return.
  • Add another parameter and set the following properties:
    • Name = Id
    • ParameterDirection = In.
  • In the ReadItem method’s Instances node, select <Add a Method Instance>, choose Create Finder Instance and set the following properties:
    • Name = ReadItem.
    • Type = Specific Finder
    • Default = True
    • DefaultDisplayName = ReadItem
    • Return Parameter = ReturnParameter

  • In the BDC Explorer Window locate the ReadItem parameters and expand them both.
  • Select the idTypeDescriptor under the ReadItem method’s Id parameter and set the following values in the Properties window:
    • Name = CrimeDataId.
    • TypeName = System.Int32.
    • Identifier = Id.
  • Right Click CrimeData under ReadList -> ReturnParameter -> CrimeDataList -> CrimeData and select Copy.
  • Right Click ReturnParameter under ReadItem and select Paste.

  • Click Yes when Visual Studio asks to replace the existing type descriptor.
  • In the BDC Explorer, locate the Model and rename it from BdcModel1 to CrimeDataModel. Repeat this for the LobSystem and the LobSystemInstance.
  • The BDC Explorer window now shows the complete ReadList and ReadItem structure.

  • The BDC model is ready. The next step is adding the Azure Marketplace service reference. Switch to the Solution Explorer and add a Service Reference.
  • Enter the Azure Marketplace Service Root URL in the Address box, name the service reference CrimeDataServiceReference and click OK.

  • Switch back to the Solution Explorer and add a new class to the project. Call it CrimeData.
  • Add the following code to the CrimeData class. Note that the property names and types match the type descriptors defined in the model:

public class CrimeData {
    public int Id { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public int Year { get; set; }
}

  • Add a new class to the project and call it CrimeDataService. Add the following code to the CrimeDataService class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.SharePoint.Administration;

public partial class CrimeDataService {

    // Service root URL of the Azure Marketplace crime data feed.
    private string _url = "https://api.datamarket.azure.com/data.gov/Crimes/";
    private string _liveID = "{Your LiveID}";
    private string _accountKey = "{Your AccountKey}";

    private static CrimeDataServiceReference.datagovCrimesContainer _context;

    public CrimeDataService() {
        _context = new CrimeDataServiceReference.datagovCrimesContainer(new Uri(_url)) {
            Credentials = new NetworkCredential(_liveID, _accountKey)
        };

        // Without this callback the call can fail with:
        // "The underlying connection was closed: Could not establish trust relationship for the
        //  SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException:
        //  The remote certificate is invalid according to the validation procedure."
        // Accepting every certificate is fine for this demo, but not for production.
        ServicePointManager.ServerCertificateValidationCallback +=
            (sender, certificate, chain, sslPolicyErrors) => true;
    }

    public static IEnumerable<CrimeData> ReadList() {
        try {
            // Query the OData feed with LINQ and project the results into our own entity.
            var crimeData = (from c in _context.CityCrime
                             where c.City == "Washington"
                             select new CrimeData {
                                 Id = c.ROWID,
                                 City = c.City,
                                 State = c.State,
                                 Year = c.Year
                             }).ToList();
            return crimeData;
        } catch (Exception ex) {
            SPDiagnosticsService.Local.WriteTrace(0,
                new SPDiagnosticsCategory("Azure BCS connector: failed to fetch read list",
                    TraceSeverity.Unexpected, EventSeverity.Error),
                TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
        }
        return null;
    }


    public static CrimeData ReadItem(int Id) {
        try {
            // Fetch the single row whose ROWID matches the identifier BCS passes in.
            var item = _context.CityCrime.Where(x => x.ROWID == Id).ToList().First();
            var crimeData = new CrimeData {
                Id = item.ROWID,
                City = item.City,
                State = item.State,
                Year = item.Year
            };
            return crimeData;
        } catch (Exception ex) {
            SPDiagnosticsService.Local.WriteTrace(0,
                new SPDiagnosticsCategory("Azure BCS connector: failed to fetch read item",
                    TraceSeverity.Unexpected, EventSeverity.Error),
                TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
        }
        return null;
    }
}

  • Press F5 to build and deploy the solution. Note that the method names in the model (ReadList, ReadItem) must match the static method names on the service class, otherwise BCS cannot resolve them.
  • After deploying the External Content Type we first need to set permissions in the BDC Service Application. Browse to Central Administration, go to Application Management -> Manage service applications and click the Business Data Connectivity service application. Select the CrimeData ECT and click Set Object Permissions.
  • Add yourself and assign all the permissions.

  • Next is creating an external list for the CrimeData ECT. This can be done using SharePoint Designer or the browser; we will use the browser for this sample. (A programmatic alternative is sketched after these steps.)
  • Browse to the SharePoint site, click on Site Actions -> View All Site Content -> Create.
  • Choose External List and click Create.

  • Name the list CrimeData, click the Select External Content Type button and choose the CrimeData external content type in the dialog. Click the Create button.

  • After creating the external list, verify that the Azure Marketplace crime data is visible on the page.

  • Click on one of the list items to see the details.
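
As an aside, an external list can also be created in code. The following is a minimal sketch, assuming the names used in this post (LOB system instance CrimeDataModel, entity namespace USCrimeDataConnector.CrimeDataModel) and a hypothetical site URL; it uses the BDC properties of SPListDataSource from the SharePoint 2010 server object model:

using System;
using Microsoft.SharePoint;

class CreateExternalList {
    static void Main() {
        using (var site = new SPSite("http://sharepoint"))   // hypothetical site URL
        using (var web = site.OpenWeb()) {
            var ds = new SPListDataSource();
            // Point the list at the External Content Type deployed earlier.
            ds.SetProperty(SPListDataSource.BDCProperties.LobSystemInstance, "CrimeDataModel");
            ds.SetProperty(SPListDataSource.BDCProperties.EntityNamespace, "USCrimeDataConnector.CrimeDataModel");
            ds.SetProperty(SPListDataSource.BDCProperties.Entity, "CrimeData");
            ds.SetProperty(SPListDataSource.BDCProperties.SpecificFinder, "ReadItem");
            web.Lists.Add("CrimeData", "Azure Marketplace crime data", ds);
        }
    }
}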

The source code for this post can be downloaded here.

Implementing Documentum Repository Services for Microsoft SharePoint

August 15th, 2011

On a current project we’re integrating Documentum 6.5 and SharePoint 2010 through the EMC product EDRSMS 6.6, which is short for EMC Documentum Repository Services for Microsoft SharePoint. That is a mouthful, so in the project we just call it Repository Services.

As part of the project, we’ve been educating the client project members on how this integration works. In this blog post we’ll share some of that, including thoughts on how to make it work in a global implementation of SharePoint that is connected to a central instance of Documentum.

In itself that is not such a rare situation. Typically, the Documentum content server is implemented in a central location after careful consideration, and additional measures can be taken to speed things up at the global hubs. One of them can be the use of Branch Office Caching Services (BOCS). The problem is that BOCS can’t be used as part of the Repository Services solution. In the typical usage scenario this matters less than it sounds: SharePoint users work in and with SharePoint, and somewhere down deep the data is not stored in a SharePoint Content Database (as part of the SQL Server instance or cluster) but cached and partially forwarded into Documentum. More on this later.

With SharePoint the story varies a little more. Some say it grows organically from a departmental implementation to corporate level; some start with a central corporate instance and spin out to all regions. Stories about well-thought-out designs of farms, site collections, and content types somehow don’t surface where I am looking. Maybe I’m looking in the wrong direction. Then again, from talking to colleagues, project members and others in the Documentum-SharePoint playground, it appears that this is an afterthought that is getting more and more attention. The point is that when you start using Repository Services, you had better think beforehand!

Why do you have to think carefully about the SharePoint farms when using Repository Services?

Imagine that you need content from SharePoint that has been journaled (meaning: moved) into Documentum and is no longer directly available to SharePoint. Repository Services takes care of retrieving that content from Documentum and making it available to SharePoint. It does so by putting it in a temporary cache. This temporary cache is a location in the file system that only exists at farm level.

Meaning: for the whole farm, there is just one (1).

Now imagine a farm located in New York, with users in Sydney, Bangalore, London and Chicago. They probably don’t care, since they’re on the global high-performance network between these hubs. But now imagine users in Wellington (NZ), Vancouver and Madrid. They might not be such happy users, because their content needs to come from downtown New York, even if that content is only created and used by them.

Of course, more factors apply, but the key is: think about multiple farms when content needs to travel around the globe while users are only working on it in a single country or region.

Why do you have to think carefully about the SharePoint Site Collections when using Repository Services?

There are two simple reasons:

1. The current version of Repository Services is granular up to the level of a Site Collection. This means that either a complete Site Collection is under control of Repository Services or it is not. The effect is that even if you only need the content of one single site to be journaled into Documentum, all content of the Site Collection is moved out of the SharePoint Content Database. The content that does not need to be journaled into Documentum is kept in the Performance Cache, a location in the file system.
2. There is only a single Performance Cache for each Site Collection.

Again, imagine the global usage scenario explained before. SharePoint allows you to have multiple Content Databases, and with some careful tweaking you can force a Site Collection to be stored in a particular Content Database (one way to do this is sketched below). As part of a SQL Server cluster, you could bring the content near the users by creating regional Content Databases.
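
For reference, one common form of that tweaking is to temporarily cap the site count of every content database except the target one, so that the next site collection is forced into it. A hedged sketch against the SharePoint 2010 server object model (the database name, URLs and account are hypothetical):

using System;
using Microsoft.SharePoint.Administration;

class ForceSiteIntoRegionalDatabase {
    static void Main() {
        var webApp = SPWebApplication.Lookup(new Uri("http://sharepoint"));
        foreach (SPContentDatabase db in webApp.ContentDatabases) {
            // Cap every database except the regional target at its current site count.
            if (db.Name != "WSS_Content_EMEA") {
                db.MaxSiteCount = db.CurrentSiteCount;
                db.Update();
            }
        }
        // The new site collection now lands in WSS_Content_EMEA.
        webApp.Sites.Add("/sites/emea-projects", "CONTOSO\\owner", "owner@contoso.com");
        // Remember to restore the original MaxSiteCount values afterwards.
    }
}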

With Repository Services, the content is not stored in the regional Content Database but in the Performance Cache of the Site Collection, a location in the file system that can be anywhere on the globe.

The key is that, just as with creating multiple farms, you have to consider using multiple Site Collections and decide which sites – and thus which user population – become part of which Site Collection. Creating regional Site Collections allows you to have a regional Performance Cache and keep the content as close to the users as possible.

Another key consideration: put content that can live on its own, and doesn’t need to be under control of Repository Services, in a separate Site Collection.

This may have some implications for the user experience (users have to work with distinct Site Collections), but you also may not be too keen on having large volumes of content in the Performance Cache that will never be journaled into Documentum.

It’s not a matter of right or wrong. It’s a matter of making a well-considered decision.

Why do you have to think carefully about the SharePoint content types when using Repository Services?

By definition (MSDN), a content type is a reusable collection of metadata (columns), workflow, behavior, and other settings for a category of items or documents in a Microsoft SharePoint Foundation 2010 list or document library. Content types enable you to manage the settings for a category of information in a centralized, reusable way.

When working with Repository Services, the metadata bit is what we’re looking for.

This may sound trivial, but way too often modifications to a document library in SharePoint are not based on the definition of content types. As part of good practice in document management, you first think about the information (attributes, properties, metadata – you name it) you need to know about content as part of your data model, before you think about how it is presented in your presentation layer, in this case a SharePoint document library.

The key is to think from a document management mindset.

As said, with Repository Services content is journaled into Documentum. This is done via journaling rules, which select content based on… metadata values! So even though all content of a Site Collection is moved into the Performance Cache, only the content that is selected by a journaling rule is moved into Documentum. It goes without saying that the more metadata there is to select on, the more granularly content can be selected for journaling.

Next to the actual content, a convenience copy of the metadata in XML format is also stored in Documentum. Although this is just a copy (changes are not propagated back into SharePoint), it can be used for further actions inside Documentum. Think of a scenario where you need to put content under records management: the basic metadata that resides as attributes on the Documentum doctype for journaled content is limited, and may not be enough to make the proper RM decisions.

The key is to consider the usage of the content well beyond the point where it leaves SharePoint, and to reflect those requirements in the SharePoint content type.

In summary.

Using Repository Services to bridge the SharePoint and Documentum worlds can give you great user experience (SharePoint) and great document management (Documentum). It can also give you headaches if you forget about document management core values and information architecture design.

And if you have such a headache… there is aspirin available.

Ed Steenhoek

ECM Solution Principal

SharePoint and Documentum: competitors or partners in crime?

June 5th, 2010

After a long but very interesting week of technical SharePoint sessions at devconn, and the week before that a long week of super sessions at Momentum, it is time to make up my mind.
Should SharePoint and Documentum be seen as competitors, or is the combination of the two a very good example of the whole being greater than the sum of its parts?

1) Documentum: this is still the number one if you really want to control and manage your documents at an enterprise level (which has only little to do with technical scaling and high availability/failover). If you are looking for compliance support or need enterprise content management, this is best of breed. Interfacing for knowledge workers who use Word, Excel, Project or PowerPoint has improved but still needs work to be accepted. Real collaboration, or adding social computing to your work environment, is not possible.

2) SharePoint: with the new ribbon, Microsoft is setting a new standard for interfacing with the desktop and with editing applications like Word, Excel and Project, and also Access, InfoPath and Visio. SharePoint’s possibilities to collaborate, to use personal social computing in a working environment, and to use tables and lists to manage data (no matter whether it is structured or unstructured) are the best ever. But creating good structure within your unstructured data and setting up clear versioning, enterprise security and content lifecycle management is difficult, and still a long way off from being enterprise-ready. Workflow in SharePoint is still immature and, if used widely, unmanageable. It is also not possible to set up an enterprise-ready solution in SharePoint that remains maintainable.

But the two combined: WOW, DOUBLE WOW. The two integration modules of Documentum – the power of business process management, lifecycle management, records management and the new case management options, seamlessly integrated with the interfacing and social computing power of SharePoint – finally give an enterprise the possibility to create solutions with an exceptionally good user experience that still meet current compliance and enterprise demands.

But how will it work?

In my perfect (well, let’s say a bit more perfect) information management world, an enterprise sets up SharePoint in a small, silo-ed approach. Technically it can be centralized, highly available, etc., but to make it functionally maintainable you need to chunk it up into smaller functional pieces. That way users are able to really collaborate with the colleagues they work with. This interface will make the end users happy and encourage them to make more use of ECM tools.

Underneath this, your enterprise will need Documentum to guarantee a truly enterprise-ready and compliant solution. When documents or content need to be managed at an enterprise level, according to rules and regulations, you can set up your SharePoint environment to automatically or manually declare a document controlled, and make full use of the Documentum xCP and records management functions to control your documents at an enterprise level. All controlled and structured business processes will be fully managed by Documentum, and depending on the type of worker you can offer a full collaboration environment (a SharePoint interface to perform their controlled and uncontrolled tasks) or the focused and simple TaskSpace interface to perform standard controlled tasks. Last but not least, for the real compliance/records freaks there is the Records Manager interface to maintain the filing plans, retention and disposition, library functions, etc.

If you top this off with a Google search box to make real enterprise search possible (across structured and unstructured data alike), your ECM pie will be complete and tasty for everybody.

SharePoint: the death of ECM?

May 25th, 2010

After a full week of in-depth information about SharePoint 2010 at the devconn in Vegas, it is time to structure my thoughts about it. As an (almost) lifelong ECM architect who is really happy with tools like Documentum and FileNet, it is not easy not to be skeptical about SharePoint. Yes, it is Microsoft, and yes, they threw a lot of money at it, but is Microsoft able to build (or copy) a fully enterprise-ready ECM solution within 3 years, where others have already been trying for 15 years or more?

The answer, as with a lot of Microsoft tools, is yes and no:
Yes, SharePoint is by far the best interface for doing any work on unstructured data.
No, SharePoint is not able to support an enterprise in maintaining and controlling documents at an enterprise level.

Here are my results:

It was a developer conference, so maybe I was in the wrong place for this, but I have not seen any presentation about how SharePoint is filling in the current ECM demand and vision. Everybody I talked with or who presented was a real expert on SharePoint and a technically state-of-the-art architect; the SharePoint MVPs really know their business. But when I ask how you set up an enterprise architecture to support enterprise-wide ECM, it all boils down to creating technical workarounds or third-party solutions to try to enforce your enterprise rules and regulations.
So SharePoint is the best tool to support collaboration in a workgroup environment, where you maintain your rules and work on the content in small(er) silos.

It is interesting to see how all SharePoint experts accept how good the interface of SharePoint actually is. When you are used to the interface of Documentum and are suddenly confronted with the SharePoint possibilities to create lists, interact with Office, and so on, you are stunned. All long-time users really like the new interface, but no longer see how much richer it is compared to a lot of others out there. The integration with Office – reading or editing a document from within the Office application – is so easy and natural that people will really like using it. The big yellow button in Word 2010 to edit a document, the automatic display of SharePoint properties in Word, and the possibility to change and use them within your documents are all very good.

The possibility to create forms/lists from InfoPath is very strong in 2010. It makes the no-coding concept much easier to support: you can customize a screen for a specific user role without much technical skill or in-depth knowledge of SharePoint.

I was even more amazed by how they set up the social computing options within SharePoint. One of the tough questions I get a lot is: how do I combine all the private social media that my workers use, and the way they collaborate and share their knowledge, with the enterprise (controlled or semi-controlled) demands for collaboration and knowledge sharing?
With the new SharePoint 2010 options you are able to reuse social media like Facebook, Twitter and LinkedIn but control them at a company level (see the comments on the enterprise problems of SharePoint below). This gives users a real ‘I can configure it how I like it’ feeling while the organization is still able to control it from a central view.

Now the technical architecture side of things. This is where I don’t understand what SharePoint is doing or which way it is going. Within a site, most of the data is still maintained in one SQL table. The default option is still to store documents as RDBMS blobs, and everything is still limited by the size of that table; site collections have a limit because of this. Setting up an enterprise solution within a global organization is impossible, and distributed solutions are a mess; if I talk to the independent experts, they advise you not to follow the guidelines of Microsoft but to set up a sort of site-collection-in-site-collection approach. On paper this looks nice, but it gives you tons of limitations because replication between site collections is limited. Besides the architecture, I also see some risks in the enterprise management of customizations. If you have multiple groups working on different projects within a SharePoint solution, they will have a huge challenge in maintaining their customizations and code and being sure of the impact of any change. SharePoint customizations are really designed for small groups of (preferably one) consultant(s) who talk with the users and build for their individual needs.

And then there was workflow. This has always been one of my markers. Giving a demo of workflow management is always easy and it always looks good, but when the real thing comes in, workflow – or even worse, business process management – is not easy.
When a company wants to use real BPM, it needs to be able to maintain the workflows at a corporate level; things like orchestration and overall process monitoring are a minimal requirement. SharePoint talks about workflow and even the option of business process management, but the tool is way, way off from getting this right. Doing some basic document routing and some basic document approval is possible, but when you want to do any enterprise workflow, a lot of C# coding kicks in. Having an overview of all the steps in one workflow is impossible, let alone an orchestration of whole business processes. If a running workflow needs to be changed within an organization, the impact of the change can only be determined by a programmer – and probably only by the programmer who created the workflow in the first place. This is not the way you want to go.

Overall, I am impressed by the functionality SharePoint has to offer when it comes to collaboration and user experience. They have set the new standard for how users will want to interact with systems that hold their unstructured data. At an enterprise level, using SharePoint is still tricky: you really need to define your architectural design up front and use a silo-ed approach to maintaining your data and customizations; otherwise you will end up with a mess or a poorly performing system. So yes, SharePoint will challenge all other vendors to finally put way more effort into the user experience or die, and no, SharePoint is still a long, long way off from being a competitor for real ECM solutions like Documentum or FileNet.

Combining the two worlds is still the best way forward.