Response.End terminating page events within a SharePoint page

I needed to present the user of a website with a prompt to open/save a PDF that had been dynamically generated from the user's inputs during the completion of a form.
I wasn't able to navigate away from the page where the option to download the PDF was presented, as I needed to remain within the context of the particular section of the form. The easiest way to provide the open/save option was therefore to modify the Page.Response as follows:

Page.Response.ContentType = "application/pdf";

Page.Response.AddHeader("content-disposition", "attachment;filename=" + this.FormControl.FormDefinition.Name + ".pdf");

Page.Response.Buffer = true;

Page.Response.OutputStream.Write(PdfBuffer, 0, PdfBuffer.Length);

Page.Response.End(); // end the request so nothing further is written to the stream



This worked fine: the user would click a download button, the open/save dialog would pop up and the PDF would be streamed exactly as I wanted. However, when they closed the dialog and returned to the page with the download button on it, none of the buttons on the page worked any more; the page was effectively 'dead'. The confusing part was that if I used exactly the same HTML and code on a page outside of SharePoint I didn't have any problems at all. This was specifically related to pages hosted within SharePoint.

After digging around I found a comment buried in a forum thread where the following solution was proposed, and it works a treat:

In the page where your ‘download’ button resides, add the following code where you create the button control:

downloadButton.OnClientClick = "this.form.onsubmit = function() {return true;}";

If using a LinkButton instead of a Button add the following instead:

exportButton.OnClientClick = "document.getElementsByTagName('form')[0].onsubmit = function() {return true;}";

And everything then continues to work when you return to your page. The reason appears to be that SharePoint updates some kind of timestamp hash on the form before it is actually submitted to the server. This is done to prevent the form from being submitted more than once if the user clicks again before the postback has completed, which is a good thing, except when trying to do what we are doing here.
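Putting the two pieces together, the overall pattern looks roughly like this. This is a sketch, not the original code: the control class, `GeneratePdf()` and the filename are illustrative assumptions.

```csharp
using System;
using System.Web.UI.WebControls;

// Hypothetical control hosting the download button described above.
public class PdfDownloadControl : System.Web.UI.UserControl
{
    protected override void CreateChildControls()
    {
        Button downloadButton = new Button();
        downloadButton.Text = "Download PDF";
        // Restore the form's onsubmit handler so SharePoint's
        // double-submit protection doesn't leave the page 'dead'
        // after the response is replaced by the PDF stream.
        downloadButton.OnClientClick =
            "this.form.onsubmit = function() {return true;}";
        downloadButton.Click += DownloadButton_Click;
        Controls.Add(downloadButton);
    }

    private void DownloadButton_Click(object sender, EventArgs e)
    {
        byte[] pdfBuffer = GeneratePdf(); // hypothetical PDF generation
        Page.Response.ContentType = "application/pdf";
        Page.Response.AddHeader("content-disposition",
            "attachment;filename=report.pdf");
        Page.Response.Buffer = true;
        Page.Response.OutputStream.Write(pdfBuffer, 0, pdfBuffer.Length);
        Page.Response.End(); // terminate the request here
    }

    private byte[] GeneratePdf()
    {
        // Placeholder: the real code built the PDF from the form inputs.
        return new byte[0];
    }
}
```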

Wil Wheaton Week (Geek Cred. +3)


Somehow this week the planets have aligned in such a way that my week has centred around Wil Wheaton, raising my Geek Credibility by at least plus 3.

Not only have I just finished reading Just A Geek by Wil Wheaton but I finally got around to watching Stand By Me (which was a 99p deal-of-the-week rental from iTunes). The book is a fantastic read: not only does it give a personal insight into the frustrations and anxieties he faced as a successful teenage actor, from Stand By Me through Star Trek: TNG and beyond, but it also acts as an interesting account of his rise from awkward, gangly youth to the 'grown-up' geek that I follow on Twitter and on his blog.

Another interesting point is that Wil is the same age as me, something I never realised when watching TNG, and reading his book I feel that I can relate to him in a number of ways (minus the ‘successful’ and ‘actor’ ones of course…). I too aspire to one day become a writer (he has obviously already achieved this, I have some way to go!!!) and the focus he puts on his family is something I believe in very strongly.

So I get +1 for reading the book and +1 for watching the film (if you haven’t seen it, I really recommend it. So many things that happen, phrases they say and general experiences bring back so many memories of my own childhood…).

The final +1 I get is for the awesome T-shirt that I am wearing in the photo. If you haven't heard of it, go and have a look. It's a custom T-shirt website that allows people to submit their own designs, and the top 20 most popular can be ordered (they deliver to the UK and are very reasonably priced). I was the VERY LAST person to order Wil Wheaton's 'How we roll' T-shirt design, and if you go to this link you'll see proof that I was the person that made this T-shirt go out of print, as I ordered the last one!! I ordered it using an id that matches my Twitter account and my Xbox Live gamertag.

So, how Geeky is THAT!?

Improved CAML IntelliSense

Don't know why I missed this, but it's an essential add-in for Visual Studio 2005/2008. This guy has taken the core SharePoint schema files and extended them to add all that information from the SDK that is notoriously difficult to find. He's done this by gathering as much information as possible and putting it in xs:annotation elements, and by replacing as many xs:string types as possible with enumerated types so that we get the valid values.

A couple of examples that will make you go and download this straightaway:

  • When selecting a true/false type, it pops up an annotation letting you know which kind of true/false value you need: TRUE, True, true, etc.
  • When you are creating a ListInstance and start typing the TemplateType, not only does it give you an enumeration of the various integer values you can enter, but it also pops up an annotation describing what each one is

I honestly think that if you are doing SharePoint development this could easily save you a couple of hours a week!

Here’s a link to his post that describes it in more detail and where you can download the files : CAML.Net IntelliSense

Default values on a content type not set in a page layout

Now this took me some time to figure out…

Basically the problem I was having was that some of my fields, specifically date time fields, weren't being populated with their default value of [Today]. I had set the default value correctly on the field and assigned the field to my Content Type, which I had then applied to my Pages library (all through a Feature). Everything looked fine until I created a page and, hey presto, all my fields were empty.

I thought at first it was to do with how I was provisioning my fields in XML, but everything appeared fine. I then added a date time field with a default value to my content type through the GUI to make sure, and that didn't work either!

I then started looking at the settings on my Pages library to see if anything there was wrong. I had multiple Content Types assigned to the library, so to avoid confusion I removed all the other Content Types and left mine as the default. I created a page and all my default values were populated!! Aha, I thought, there's a conflict somewhere or a corrupt Content Type. So I added them all back in, testing after each one, and everything worked!

After further testing, it turns out that default values on fields in a Content Type used by a page layout will ONLY get populated for the DEFAULT Content Type on a library. It definitely seems to be a bug, and it's a problem if your multiple content types each have fields with default values; luckily for me that wasn't the case.

An unhandled exception occurred in the user interface. Exception Information: OSearch (SearchAccount)

I had this not-very-nice error whilst trying to start the search service on my single-server SharePoint installation through Central Administration:

An unhandled exception occurred in the user interface. Exception Information: OSearch (SearchAccount)

This took me a while to figure out, as it appears that even though I wasn't using a domain account, the UI expects an account in 'domain format', so entering DevMachine\SearchAccount worked!


Day 3 of the SharePoint European Best Practices Conference

The final day of the conference got off to a great start when I discovered I had won a Sat Nav in the prize draw. Can’t complain!

The first talk I went to was Maurice Prather's session on best practices for creating, deploying and managing list and item events. This was very interesting as, no matter how much you think you know about this topic, there were plenty of useful nuggets to take away. One common theme that surfaced again in this talk was being careful about what you package in a wsp to minimise the upgrade impact. I think the best approach is to logically group together bunches of functionality so that on an upgrade you upgrade only the functional area you have changed, rather than, say, all the event receivers. Other best practices:

  • In general it's better to bind event receivers to content types rather than lists; a caveat being that some lists have multiple content types, in which case it's better to bind to the list. Also, a document-added event only uses the default content type
  • If PartiallyTrustedCallers are not allowed on your event receiver class you may get skewed results
  • Use DisableEventFiring and EnableEventFiring in your event receiver class to control recursion in events and prevent the 'fan' effect
  • There is no Site Added event, but if you have an auto-activated feature on a site, you can use the FeatureActivated event to simulate one
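The recursion point above can be sketched as follows. This is an illustrative example, not from the session; the receiver and field names are assumptions, but DisableEventFiring/EnableEventFiring are the real WSS 3.0 methods.

```csharp
using Microsoft.SharePoint;

// Sketch: without DisableEventFiring, the Update() call inside
// ItemUpdated would re-trigger ItemUpdated, 'fanning' out events.
public class StampReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        this.DisableEventFiring();   // suppress events for this thread
        try
        {
            SPListItem item = properties.ListItem;
            item["Comments"] = "Processed"; // hypothetical field
            item.Update();           // will NOT fire ItemUpdated again
        }
        finally
        {
            this.EnableEventFiring(); // always restore event firing
        }
    }
}
```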

The next session was Spencer Harbar and Andrew Connell's session on launching your MOSS publishing site. This had some repeated content from Andrew's previous session on building high-performance solutions, but it's such a broad subject that he was able to dig deeper in other areas. A few things I noted down from this session:

  • To get rid of the ActiveX warning message that you get the first time you go to a SharePoint site, see MS KB 931509
  • To get the ActiveX warning message to reappear after you have accepted it, go to IE settings->Manage Add-ons and disable the namectrl class and then delete it, though this didn’t seem to work in Andrew’s demo!
  • The ViewFormPagesLockDown feature needs to be reactivated when changing the anonymous settings on a site collection
  • Remember to enable AND configure your page output caching
  • The BlobCache maxSize in the web.config is in gigabytes, NOT megabytes
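For reference, the BlobCache element in the web application's web.config looks something like this (the location and path pattern are illustrative); note that maxSize="10" means 10 GB, not 10 MB:

```xml
<!-- maxSize is in GIGABYTES: this configures a 10 GB disk cache -->
<BlobCache location="C:\BlobCache"
           path="\.(gif|jpg|png|css|js)$"
           maxSize="10"
           enabled="true" />
```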

During the lunch session I got a chance to see a really nice Silverlight demo, by Andrew Woodward, of a SharePoint site being tested in schools in Leeds. It was very, very nice and a lot of people came away very impressed. A few people asked, as always happens, about the accessibility of using Silverlight. My feeling about accessibility is that it's a means to get the core information of the site out to the user, so why can't you have one mechanism for one audience and another for the rest? I get fed up with trying to deploy a single (often lowest-common-denominator) solution…

After lunch it was time for Chris O'Brien's session on the various approaches to, and best practices for, deploying SharePoint sites through multiple environments. I had been looking forward to this one, as I have a lot of time for Chris (we worked together for a while a few months back) and I have been helping him a little with the Kivati piece of his talk. One thing I noted during the demo was that his Content Deployment Wizard can now import multiple .cmp files in one go. I'm pretty sure it hasn't always been able to do that; plus you can script it from the command line now, which is very handy. I'm thinking of writing a deployment post myself to cover integration of PowerShell and the command-line Content Deployment Wizard to give you a nice, complete deployment solution… Very good talk by Chris; he comes across very relaxed on stage, and I'm pretty sure he had more questions after his session than most of the others, which shows the interest in this area. I have to say a big thanks to Chris for pimping me out as the 'Kivati guy'; hopefully that can only be a good thing, though the phone hasn't rung yet 🙂

The final session of the day was Eric Shupps' session on high-performance programming for SharePoint developers. This was very interesting as it went to quite a low level at some points, and the demos gave good timing comparisons for the various techniques for getting list items. An interesting thing he mentioned is that every piece of data in a SharePoint site collection, no matter where it is, goes into one SQL table, which emphasises the importance of optimised data access. A few tips here:

  • Use the Search API for cross-site queries on lists, and use targeted scopes to allow for more frequent indexing. I really like this idea and can see lots of uses; not only that, but the performance is really good. The only caveat is that the data could be stale (depending on the frequency of your index), so it may not fit all situations
  • Get your data either via web services or the Search API and then use LINQ to provide a relational data layer to aggregate data across lists
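The LINQ aggregation idea can be sketched roughly like this. The list names and the 'Owner' field are illustrative assumptions, and for brevity this fetches items through the regular object model; the talk suggested web services or the Search API as the retrieval mechanism.

```csharp
using System.Linq;
using Microsoft.SharePoint;

public static class ListAggregation
{
    // Sketch: relational-style join across two lists using LINQ to Objects.
    public static void PrintWorkload(SPWeb web)
    {
        // SPListItemCollection is non-generic, so Cast<> is needed for LINQ.
        var tasks = web.Lists["Team Tasks"].Items.Cast<SPListItem>();
        var issues = web.Lists["Issues"].Items.Cast<SPListItem>();

        var workload =
            from t in tasks
            join i in issues
                on (string)t["Owner"] equals (string)i["Owner"]
            select new { Owner = (string)t["Owner"],
                         Task = t.Title, Issue = i.Title };

        foreach (var row in workload)
            System.Console.WriteLine(
                "{0}: task '{1}' / issue '{2}'",
                row.Owner, row.Task, row.Issue);
    }
}
```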

We then had the Q&A session for developers, and yet again the panel worked so well together on stage that it was not only very informative but also very entertaining. My two questions got answered. One was: what's the best way to monitor that your events are still firing as expected? The answer was that when an event fails nothing is logged, so you'd have to implement some sort of tracing/CRUD mechanism that would run occasionally (maybe overnight?) so that you were testing in a controlled fashion. Now I've thought about this, and probably the only way to do it would be to have a test harness that interrogates a list that has an event attached: before triggering an event it would write to a SQL table, and the event would then be responsible for updating that table. That way you could see if any weren't firing. A future project for me, I think…

My other question was about exactly how many gallons Eric's hat was, and the answer is that there's no such thing any more; there are only high and low ones now, apparently. At least he thought the question was funny, and he didn't hit me…

SharePint in the evening was great; I chatted to loads of people and really enjoyed myself. I've had a great time over the last 3 days and will definitely go again. I just need to convince Chris to talk about Kivati again, and then convince him that he needs me to travel with him to the next conference… ;-)

Day 2 of the SharePoint European Best Practices Conference

Day 2 in sunny-ish London, and a lovely start to the day with croissants and tea. I got there nice and early, had a wander around and met some nice people, including Philip Broadbery from a growing company called Fernhart New Media that seems to be going places…

The first talk I attended was Eric Shupps' talk on designing and deploying enterprise branding solutions using custom site definitions. This seems to be a contentious issue among some of the MVPs, with some interesting arguments in the Q&A session at the end of the day regarding declarative versus code solutions. I'm leaning more towards the code solution at the moment, but my recent experiences with site definitions have debunked a number of myths around their complexity and I'm not as scared of them as I once was… Eric was a very entertaining speaker, a large presence on the stage, particularly in his large cowboy hat! A couple of useful tips I got from this included separating out the features and site definitions into different wsp's. Quite often they get bundled together into one solution, but this means that when you upgrade, the infrastructure will always recycle the app pools because you are re-provisioning onet.xml files etc., when quite often they haven't changed and it's only really the features that have changed. Having them separate alleviates that problem and makes live deployments less 'impactful'. Another tip was a pointer to the Patterns and Practices SharePoint Guidance on CodePlex, which is a great resource for MOSS developers.

The next talk was by Ben Robb on customising the search UI in SharePoint 2007. This was actually quite a useful talk, as it was particularly pertinent to some problems we were facing with my current client. He suggested using the out-of-the-box search web parts where possible and customising them using XSLT rather than re-inventing the wheel. He mentioned a few SEO things to help your sites, including organising your sites logically and having an up-to-date sitemap file in the root of your site. One suggestion was to have a custom timer job that uses the sitemap provider on the site collection to regenerate the sitemap XML file and then re-copy it to the root. A simple idea, but one I don't think I've seen done anywhere…

Todd Bleeker was up next to talk about best practices for creating custom field types. The amount of information this guy can cram into a session is amazing; I was physically exhausted afterwards…! Some really good content here (too much to remember, let alone post, but I should get the DVD so I'll follow up). His main recommendation was to use User Controls for the UI of the control class of a custom field type, and to use a Value class to enable serialisation into a friendly object. This gives you a nice logical separation of UI and code, plus the design surface and IntelliSense within Visual Studio, instead of using the rendering-template XML that is currently the only documented way of doing this. Use the FieldEditorUserControl property in the fldtypes.xml file to point to your user control.

Andrew Connell swiftly followed with a talk on building high-performance solutions on MOSS 2007, again a talk crammed with content. Key things I picked out were to minimise the perception of page load by lazy-loading core.js (see Microsoft Knowledge Base Article 933823), and to look at image stitching/clustering to save on HTTP requests for page furniture. This works along the same principles as sprite sheets, where your images are combined into one long panorama that you index with an offset of the image width, though it's done via cropping in CSS. He suggested a company that has some useful tools to help with this, as I imagine it would be a bit of a pig to do manually: the Run-time Page Optimizer. Another big area that can lead to rapid improvements is IIS HTTP compression. The dynamic compression level isn't set by default for IIS7 sites; if you start by setting the level at 9 (10 is the max, but the CPU hit just isn't worth it) and then scale it down until your CPU usage is acceptable, it can give you performance improvements of 20%+ just by 'checking a box'…

That was the last talk of the day, but it was followed by a Developer Panel Q&A session which was quite interesting, though difficult to write down. There were lots of interesting discussions about site definitions versus feature stapling and about defining metadata at the site level (the consensus seemed to be to set it on the default page of the site, as it's essentially the same thing). I hope these sessions are included on the DVD we get. Will let you know!

Anyway more fun tomorrow!