Quick-fix Orphan Site Blocks Patch


After applying Cumulative Updates to SharePoint, you may run into an issue with orphan sites in one or more content databases.

Because these are flagged as errors, the configuration wizard that needs to be run after the bits have been applied to all servers cannot complete the job and will fail.


06/29/2016 07:17:54.93 OWSTIMER (0x4D38) 0x4F84 SharePoint Foundation Upgrade SPContentDatabaseSequence ajxkz ERROR Database [SP_DVT_INSIGHT_ContentInsightDB_01] contains a site (Id = [54916e1a-54f3-43d0-ae74-d2915ab4e55f], Url = [/Workspaces/WS_002250]) that is not found in the site map. Consider detach and reattach the database. 39cc8a9d-a8ae-a04a-f205-a18cb49efb20


The solution is to detach the content database where the orphan site has been detected and then mount the database back in. After the database is back in its place, you can locate the orphan site and delete it. (If you are curious, you should be able to browse to that orphan site, just to double-check there is no critical content in it.)

High-level steps

  • Detach the mentioned content database
  • Mount the mentioned content database back in
  • Delete the orphan site
  • Restart the configuration wizard


Get-SPContentDatabase [DB] | Dismount-SPContentDatabase
Mount-SPContentDatabase [DB] -DatabaseServer "SP-SQL-FARM" -WebApplication [DEFAULT ZONE URL OF THE WEB APP]
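Once the database is mounted again, the orphan site becomes addressable and can be removed. A minimal sketch, reusing the database name and site ID from the log entry above (replace them with the values from your own log):

```powershell
# Locate the orphan site inside the remounted content database
# (names and IDs below are the ones from the upgrade log entry above).
$db = Get-SPContentDatabase "SP_DVT_INSIGHT_ContentInsightDB_01"
$orphan = $db.Sites | Where-Object { $_.Id -eq "54916e1a-54f3-43d0-ae74-d2915ab4e55f" }

# Optional: browse $orphan.Url first to double-check there is no critical content.
# Then delete the site; -Confirm:$false suppresses the confirmation prompt.
$orphan | Remove-SPSite -Confirm:$false
```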



Deprecated Features in SharePoint 2016


SharePoint 2016 comes with a strong focus on…

  • Improved User Experiences
  • Cloud-Inspired Infrastructure
  • Compliance and Reporting

…however, with the release of SharePoint Server 2016, Microsoft has also deprecated a few features.

Take a look and tell me what you think, especially about Tags & Notes. You will be surprised that tagging and notes, for example, are now gone.


Duet Enterprise for Microsoft SharePoint and SAP

If you want to deploy Duet Enterprise for Microsoft SharePoint, you must use SharePoint Server 2013 Enterprise Edition.

Duet Enterprise for Microsoft SharePoint and SAP Server was a jointly developed product from SAP and Microsoft that enabled interoperability between SAP applications and SharePoint Server 2013 Enterprise Edition.


SharePoint Foundation 2013

SharePoint Foundation 2013 remains available for use.

However, there is no equivalent to the free version of SharePoint Foundation 2013 in SharePoint 2016.


Forefront Identity Manager client (FIM)

The default process is Active Directory Import. You can also use any synchronization tool such as Microsoft Identity Manager 2016, or any third-party tool.

Microsoft will soon release tools to assist in deploying and configuring Microsoft Identity Manager 2016 to work with SharePoint Server 2016 for identity synchronization.


Excel Services in SharePoint

Excel Services and its associated business intelligence capabilities are no longer hosted on SharePoint Server.

Excel Services functionality is now part of Excel Online in Office Online Server.


Standalone Install mode

SharePoint Server 2016 doesn’t support the standalone install option, so it is no longer available in the setup program.

Use MinRole during installation and choose one of the available install options. The Single Server Farm option, where everything is installed on the same computer, is supported for dev/test/demo purposes. When you use this option, you must install SQL Server yourself and then run the SharePoint Server 2016 farm configuration wizard.


Tags & Notes

The Tags and Notes feature is deprecated in SharePoint Server 2016. Users can no longer create new tags and notes or access existing ones.

However, an administrator can archive all existing tags and notes by using the Export-SPTagsAndNotesData cmdlet. To export existing or archived tags and notes, use the following:

Export-SPTagsAndNotesData -Site http://site.hanseatech.com -FilePath
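The command above is missing its file argument; a complete call might look like this (the archive path below is a made-up example, not a prescribed location):

```powershell
# Archive all tags and notes for a site collection to a file.
# The output path is a hypothetical example - point it wherever you keep archives.
Export-SPTagsAndNotesData -Site "http://site.hanseatech.com" -FilePath "C:\Archive\TagsAndNotes.zip"
```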


Last but not least Stsadm

Of course Microsoft – again 🙂 – recommends that you use Windows PowerShell when you perform command-line administrative tasks. The Stsadm command-line tool has been deprecated, but it is included to support compatibility with previous product versions.
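For example, a classic Stsadm call next to its PowerShell counterpart (the web application URL is a placeholder):

```powershell
# Deprecated, but still ships with the product for backward compatibility:
stsadm.exe -o enumsites -url http://webapp

# The recommended PowerShell equivalent:
Get-SPSite -WebApplication http://webapp -Limit All
```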

Fresh Search Results with SharePoint


How fresh do you want your search results?

Introduced with SharePoint 2013, we got a new feature for Search that supports better content freshness: it's called continuous crawling. The name suggests SharePoint is crawling and processing content continuously. The truth is that, by default, it is a type of incremental crawl in 15-minute intervals with a few extras.

Once we know something, we find it hard to imagine what it was like not to know it.

Chip & Dan Heath, Authors of Made to Stick, Switch

Continuous crawling

…is a type of crawl whose purpose is to keep the index as current as possible by fixing the shortcomings of incremental crawling.

Incremental crawling

…is a sort of crawl where existing content in the index is crawled again – e.g. picking up changes.

Full crawling

…kicks off content discovery of the entire content source.


(Typical schedules: incremental crawl – every 30 minutes, continuous crawl – every 15 minutes, full crawl – every Sunday.)

Use continuous crawling for SharePoint content – it is not available for indexing external content, e.g. BCS, file shares and websites. With continuous crawling enabled, you will benefit from parallel indexing of content.
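Enabling it on a content source is quick; a sketch, assuming the default "Local SharePoint sites" content source name:

```powershell
# Enable continuous crawls on the (assumed) default SharePoint content source.
$ssa = Get-SPEnterpriseSearchServiceApplication
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"
$cs.EnableContinuousCrawls = $true
$cs.Update()
```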

Imagine some major changes are happening to the content and the crawl needs more time than usual to process them. With continuous crawling, the next crawl won't wait for completion – it will kick off as scheduled and process the latest changes – crawling runs in parallel.


Simon is a project manager and is introducing a lot of changes to large documents in a short time. The SharePoint crawl kicks in during these changes, and while this happens, Anita, who works in finance, uploads her calculation and expects it to be aggregated and displayed in the finance portal by search-driven web parts.

Without continuous crawling

Anita has to wait for the current crawl to finish so that the next crawl can start and process her calculation sheet. On some environments even incremental crawls can take up to 60 minutes. So in the worst case Anita has to wait [60 minutes + time for the current crawl]. This behaviour will irritate users – depending on how heavily the portal relies on search – and it will likely cause some headaches for management, as users will complain if your portal is completely search-driven.

With continuous crawling

While a crawl is running to process the “deep” change, another crawl kicks in after 15 minutes in parallel and eventually processes Anita's Excel sheet. Even if one crawl takes longer due to processing of “deep” changes, another crawl will begin its work as per schedule, without being bothered by any other crawls.

Use continuous crawling on all SharePoint content sources, and if your farm has the power – e.g. it responds quickly to user requests during crawls – and you want users to have super-fresh search results, change the continuous crawling schedule. It is 15 minutes by default.


$IntervalMinutes = 10
Write-Host "Changing Continuous Crawl Interval to $IntervalMinutes minutes"
$ssa = Get-SPEnterpriseSearchServiceApplication
# Reading the property alone changes nothing; SetProperty + Update applies the new interval.
$ssa.SetProperty("ContinuousCrawlInterval", $IntervalMinutes)
$ssa.Update()
$interval = $ssa.GetProperty("ContinuousCrawlInterval")
Write-Host "New continuous crawl interval set to $interval minutes"
Changing continuous crawl interval

Second Place and an Intra.NET Award for Us


Second place for PwC's new intranet

Intra.NET RELOADED Berlin 2016

April 27 – 28, 2017

Park Inn Hotel Berlin | KOSMOS Cinema Berlin, Germany

The 18-hour days, which were no rarity in this project, “paid off”. It is a real pleasure to be able to work with such a strong team.


Just barely missed first place – Royal DSM

Royal DSM took first place with 40% of the votes.

We missed it by a whisker with 38% of the votes – but with a large gap back to Swisscom at 22%.

Second place at the Intra.NET Awards in Berlin in the category “Innovative Technology Integration” is now ours 🙂


Royal DSM





I believe we are so successful with our intranet because we don't try to reinvent the wheel.

We build on MatchPoint Snow from Colygon AG and, together with them and a partner company, Elca, we are building a digital workplace for PwC Switzerland.

This means that in future we will lift all information flows and collaborative processes to a new level.

Client Collaboration

Clients and other firms in our network access the Client Portal via two-factor authentication; it is seamlessly integrated within Insight.


Knowledge Management

With the integration of Starmind as the digital brain, knowledge holders are connected with those asking the questions.

No matter where you are in the intranet, you can post a question at any time, and via push integration you automatically receive a notification in the portal as soon as someone has answered.


Everything is built on top of Starmind, MatchPoint Snow and SharePoint 2013. But we are already looking curiously at SharePoint 2016.

Analysing Storage Performance


A critical view on the storage subsystem with DiskSpd

When it comes to SharePoint performance, fast storage is key. So how do we measure storage in the first place, and how do we apply our usage pattern?

Microsoft superseded SQLIO with DiskSpd, released on 12/14/2015.

So we are dealing with DiskSpd from now on. In comparison to SQLIO, DiskSpd brings a few (to me) interesting features to the table.


New features:

  • Consumable XML output for automation support e.g. Scheduled analysis runs throughout the day powered by PowerShell
  • Custom CPU affinity options
  • Synchronisation and tracking functionality
  • Ability to also target physical disks
  • Variable read/write ratio

Purpose of DiskSpd

With DiskSpd we are simulating workload – specifically for SQL.
We are generating lots of IOPS – some might say “Ayoub's” here, which is my name and actually sounds very funny.

To have clean tests:

  • If you are using iSCSI LUNs or SMB shares, you depend on the network – make sure you are “alone”
  • If you are using a SAN, make sure you don't have any other systems consuming the shared resources – reduce the noise as much as possible.



So let's get our brains working with some more parameters and their meanings flying around. Fasten your seatbelt – I am about to decrypt a few things and put them in context with the real world.

What’s likely your setup?

You are running your servers on top of a virtualisation layer, e.g. ESX/Hyper-V, and your underlying storage could be anything. It doesn't really matter to us, as we don't want to dig around in the storage architecture corner. But we need to know a few things from the storage engineers.

  • What is the block/stripe unit size on the storage?
  • What is the block size on the guest?

Got the feedback? The block size on the guests and on the storage should be the same. Either adopt the guest's block size or get the disks re-provisioned. Oops.

Alright, that's it… but you can check it yourself, to be sure.

Run fsutil fsinfo ntfsinfo d: in any administrative shell.

The reported “Bytes Per Cluster” value is the volume's block (allocation unit) size.
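A quick way to script that check (the string parsing assumes the English output format of fsutil):

```powershell
# Read the NTFS allocation unit (block) size of drive D:.
# 65536 bytes per cluster corresponds to the 64K size recommended for SQL volumes.
$line = fsutil fsinfo ntfsinfo d: | Select-String 'Bytes Per Cluster'
$bytesPerCluster = [int]($line -replace '[^0-9]', '')
Write-Host "Block size: $bytesPerCluster bytes"
```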


Ideally you are having 64k block size on the storage and on the guest.

If you are dealing with SQL and you use iSCSI LUNs, format them with 64K allocation units, attach separate LUNs, and keep the OS, data files and log files separated.

Hint: if you are on a hypervisor with live migration enabled, ensure your anti-affinity rules are doing their job and you are preventing your drives from eventually all ending up on one LUN.

Let’s get started and download DiskSpd here.

Put it on drive C and use the following parameters.

  • -h disables OS caching, like SQL Server does
  • -t8 number of threads – adjust this if you know the code and know how the app talks to your SQL box. If you have a chatty black box, leave it at 8 or even increase it
  • -c1G size of the data file in gigabytes. Leave it at 1G if you are dealing with SharePoint, for example.
  • -w25 25% writes vs. 75% reads – you are invited to play with these values.
  • -o8 queue depth/length per thread (number of outstanding I/Os in the queue)
  • -b64K block size of your disks
  • -d60 duration of the test in seconds
  • -Z1G size of the write source buffer supplying random data for our write operations
  • -L capture latency – we really want this


Before you do anything on the systems in your corporation, align with the sysadmins first: tell them what you are doing and let them know the impact of the testing. They will likely schedule this with you off-hours when live systems would be affected.

.\diskspd.exe -b64K -d60 -o8 -t8 -h -r -w25 -L -Z1G -c1G c:\io.perf
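For the automation scenario mentioned above, DiskSpd can emit XML instead of text via the -Rxml switch. A sketch (the result file path is an example, and the exact XML layout can vary between DiskSpd versions):

```powershell
# Run the same test, but capture machine-readable XML for scheduled analysis runs.
.\diskspd.exe -b64K -d60 -o8 -t8 -h -r -w25 -L -Z1G -c1G -Rxml c:\io.perf > C:\io-result.xml

# Load the result and poke around the structure, e.g. per-target read/write counts.
[xml]$result = Get-Content C:\io-result.xml
$result.Results.TimeSpan.Thread | ForEach-Object {
    $_.Target | Select-Object Path, ReadCount, WriteCount
}
```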

Et voilà – have fun with the data.


You are interested in:

  • Latency
  • I/O per second (read & write)