Archive

Author Archive

1+1=3

April 4th, 2018 Comments off

The whole concept of synergy intrigues me. The idea that people who work together can achieve more than each of them could achieve separately is amazing. It’s a concept that is not used as often as it should be, but it’s one that we at Informed Consulting believe in very much. As a matter of fact, it’s the foundation we’re built on.

A big part of my task these past few weeks has been to make sure the Young Professionals learn as much as they can during their short traineeship. However, last week it was me who was taught a valuable lesson. The trainers of YSE were testing our Young Professionals’ analytical skills. They had split them into three groups and given them an impossible amount of information. At the end of the week they had to present their advice.

I myself would probably have separated from the rest and kept my findings a secret; my main goal would have been to be better than the other two groups. However, within 24 hours our Young Professionals concluded that if they worked together and shared the information they had gathered, they would achieve a lot more. So they decided to create a Microsoft Team site where they shared everything.

A group of young, inexperienced, competitive Young Professionals didn’t just understand synergy; they used it to achieve a better end result!

Young Professionals – Martijn: 1 – 0

REST services in Documentum

February 5th, 2014 1 comment

With the introduction of Documentum 7, EMC added REST services to its set of web services. Of course, it already had a set of SOAP web services in the DFS framework. However, in some cases people prefer a RESTful web service over a SOAP one, and EMC seems to be leaning that way too, as you see REST being used more and more often. There are even rumors that the SOAP web services might be removed from the product in the future, but I don’t think this has been confirmed yet.

EMC also released a set of REST web services you can use yourself, called “Documentum REST Services”, and I decided to give them a try.
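As a very first taste, a minimal sketch of a call to the service document (the host name is made up, and /dctm-rest/services is the entry point of a default installation, so it may differ in yours):

// Fetch the home document that lists the available REST resources.
$.getJSON('http://dctm-host:8080/dctm-rest/services', function (data) {
	console.log(data);
});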
Read more…

Categories: Documentum, REST Services Tags:

Installing xCP2 – lessons learned

January 6th, 2014 Comments off

Over the past few weeks I’ve spent quite some time installing xCP2 from scratch. Since there are a lot of components to install, and because I ran into some problems along the way, this took me a lot longer than I’d expected.
To prevent others from making the same mistakes I did, I thought I’d share my experience.
Read more…

Categories: Documentum, xCP Tags:

SEO in SP2013

August 16th, 2012 Comments off

Last week I saw the new SharePoint 2013. It’s a little bit smoother, and the UI has been unified with other Microsoft products through the new “Metro” look. Still, one of my (and others’) first reactions was: why would people upgrade from SP2010 to SP2013? None of us could really think of a reason.

One of the worst things in SP2010 was its SEO (Search Engine Optimization) support, so I examined whether this has gotten any better in SP2013. To my delight, it has gotten much, much better. I will try to summarize the SEO changes.

Every page (provided you’ve activated the SEO feature, which happens automatically when you create a publishing site) now has several property options, one of which is “Edit SEO Properties”.

[Image: SEO-SP2013-1]

It will send you to a page with a lot of SEO options:

[Image: SEO-SP2013-2]

It is really easy to change the basic SEO settings. I think the first three speak for themselves, but the next two are not explained that well. These are settings for the sitemap.xml file, which SharePoint can now generate automatically. With these settings you can tell search engines how important a page is within your site and how often the page usually changes (and therefore how often it should be re-indexed). With the last option you can even exclude the page from indexing entirely.
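To give an idea of where these values end up: a single entry in a sitemap.xml typically looks roughly like this (the URL is a placeholder, and the exact markup SharePoint generates may differ):

<url>
  <loc>http://www.publicurl.com/products</loc>
  <changefreq>weekly</changefreq>
  <priority>0.8</priority>
</url>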

Although you can change the settings for the sitemap.xml, the file itself is not created yet. For this, Microsoft has created a separate “Search Engine Sitemap” feature. It creates a timer job that generates and updates the sitemap.xml file. Strangely enough, this feature is not enabled automatically when you create a publishing site (or when you activate the SEO feature).

Another very important point is SEO-friendly URLs. In SP2010, URLs were often not really SEO friendly: they automatically contained /Pages and of course ended in .aspx. Now you are able to create an SEO-friendly URL.

[Image: SEO-SP2013-3]

When you click this link you are taken to a page where you can create an SEO-friendly URL. SharePoint will automatically suggest one without the /Pages part and without the .aspx at the end; for example, a page like /Pages/Products.aspx can then be reached at something like /products.

However, now you are also able to create your own term for the URL!

[Image: SEO-SP2013-4]

When adding a URL you can click the “Browse” button on the right, which opens the familiar term store management window. Just add a new term, select it, and you can use it as your own SEO-friendly URL.

One of the biggest changes in SEO over the last few years was the introduction of the “canonical” URL. For people who are not familiar with it, a short explanation: URL parameters can cause search engines to index the same page multiple times. To avoid this you can specify a canonical URL, which makes sure search engines treat all variations of a page as a single page. This is really important for external and internal linking.
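For reference, outside of SharePoint a canonical URL is normally declared with a link element in the head of a page, along these lines (the address is just the placeholder domain used elsewhere in this post):

<link rel="canonical" href="http://www.publicurl.com/products" />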

In my opinion, the most logical way to implement this would have been an extra field on the SEO Properties screen, letting people define their own canonical URLs. However, Microsoft has chosen an option at site level: you can configure which URL parameters are valid and which should be ignored by search engines. But when you’ve created two SEO-friendly URLs, you do not have the option to combine them into one canonical URL. That is a missed opportunity in my opinion.

The other option on the Site Settings -> Search Engine Optimization Settings screen is the ability to provide a meta tag for webmaster tools. To prove you’re the site owner, you often have to upload a page to the root of the site or add a meta tag to a page. In previous versions this could be a hassle; this option makes it a lot easier.
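For example, Google’s webmaster tools ask you to prove ownership with a verification meta tag along these lines (the content value here is just a placeholder):

<meta name="google-site-verification" content="your-verification-token" />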

HTML

Another point of criticism in SP2010 was the HTML code it generated. It was often very messy and not W3C compliant. It’s still not W3C compliant: when you check the code with the W3C validator, it still finds over 80 errors.

Still, a lot of improvements have been made around the HTML. The code is easier to read, and SharePoint has finally stopped using tables for layout and started using <div>s.

Another big improvement is the way JavaScript is used. In the past, a lot of JavaScript files were loaded even when most of them were not necessary. In SP2010 the script-on-demand mechanism (the “RegisterSodDep” function) was already available but wasn’t really used; it makes sure JavaScript files are only loaded when they are actually needed. In the past a standard publishing page could easily be over 500 KB in size. Now a standard publishing page is only 24 KB, a huge improvement in page size that the search engines will notice as well.
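For those who haven’t seen script-on-demand in action, a minimal sketch of how a custom script could be hooked into it (the file name, path and function name are just examples, not something SharePoint provides):

// Register the script so SharePoint only downloads it when something asks for it.
RegisterSod('mycustom.js', '/_layouts/15/mycustom.js');

// Declare that it depends on sp.js, so that gets loaded first when needed.
RegisterSodDep('mycustom.js', 'sp.js');

// Load the script on demand and run a callback once it is available.
SP.SOD.executeFunc('mycustom.js', 'initMyCustomBehaviour', function () {
	initMyCustomBehaviour(); // defined in mycustom.js (example)
});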

Conclusion

Microsoft has finally realized that SharePoint is used more and more often as a CMS for public-facing websites. SEO is an important part of that, and we have finally been given the tools to do the basic SEO optimizations.

Some steps still need to be taken, especially regarding W3C compliance, although progress has been made in that area too.

Is this a reason to upgrade from SP2010 to SP2013? If you have a public-facing website and haven’t done anything to add SEO tools to your 2010 environment, I definitely think so. This is a huge upgrade in that area and will definitely improve your search engine rankings.

Sources: http://blog.mastykarz.nl/search-engine-optimization-sharepoint-2013/
http://blog.amtopm.be/2012/07/19/branding-sharepoint-2013-changes-to-the-actual-html/

Categories: SharePoint Tags:

Path problems with the rich text editor

December 19th, 2011 Comments off

The rich text editor within SharePoint often causes problems. Apart from the fact that the HTML it creates isn’t always that neat, it is also nearly impossible to use relative paths in your links or images, since it always includes the domain you’re currently on in the link. This of course is really annoying, especially when you have a separate domain for editing your content.

However, a workaround is possible. With a bit of JavaScript you can rewrite the value of all links on a page. By making sure this piece of JavaScript is included on every page, you will never have the problem of accidentally creating dead links again.

jQuery is a very popular JavaScript library that is already available on a lot of sites. Most of the time it makes scripting a lot easier and less time-consuming. You can use the following script to change all links and images so that the right domain name is set.


if (document.domain == 'www.publicurl.com')
{
	// Only rewrite URLs on the public domain, never while editing content.
	$("a[href^='https://www.contenturl.com']").each(function()
	{
		this.href = this.href.replace(/^https:\/\/www\.contenturl\.com/, "http://www.publicurl.com");
	});
	// Do the same for all images that point to the content domain.
	$("img[src^='https://www.contenturl.com']").each(function()
	{
		this.src = this.src.replace(/^https:\/\/www\.contenturl\.com/, "http://www.publicurl.com");
	});
}

In the first line:
if (document.domain == 'www.publicurl.com')
a check is done to make sure we are on the public domain at that moment. You don’t want the links to be changed while you are editing the content.
Next, all links whose href starts with "https://www.contenturl.com" are selected. For each of them a function is executed that changes the href from "https://www.contenturl.com" to "http://www.publicurl.com".

Exactly the same is done for all images on the page, where the “src” of each “img” tag is altered.

If jQuery is not already available and is difficult to add, it is also possible to use ‘plain’ JavaScript for this. In that case it isn’t even that much more work to write.


if (document.domain == 'www.publicurl.com')
{
	// Rewrite every link that points to the content domain.
	for (var i = 0; i < document.links.length; i++)
	{
		document.links[i].href = document.links[i].href.replace("https://www.contenturl.com", "http://www.publicurl.com");
	}
	// Rewrite every image that points to the content domain.
	for (var i = 0; i < document.images.length; i++)
	{
		document.images[i].src = document.images[i].src.replace("https://www.contenturl.com", "http://www.publicurl.com");
	}
}

This does exactly the same as the script above: it finds all links and images that point to "https://www.contenturl.com" and replaces that with "http://www.publicurl.com".
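One practical note: both scripts change elements that are already on the page, so they should only run after the page has been rendered. A minimal sketch of how you could wire that up (rewriteContentDomainLinks is just a placeholder name for either of the snippets above, wrapped in a function):

// With jQuery: run on DOM ready.
$(function () {
	rewriteContentDomainLinks();
});

// Or, without jQuery: run once the whole page has loaded.
window.onload = function () {
	rewriteContentDomainLinks();
};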