Blog Archives

Using Web API Validation with jQuery Validate

Building on my last post about validating your model with Web API, if you’re calling a Web API controller from JavaScript you may need to parse the validation result and display it on the screen.

Most people using MVC will be familiar with the jQuery Validate plugin, which has been included in the default template for quite a while now. While most validations are performed client side using JavaScript adapters, some are only performed server side, so the regular unobtrusive JavaScript adapters will not catch them before the post occurs. This means that if you are using JavaScript requests with Web API to handle data manipulation, you will need to handle the validation errors that are returned yourself.
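
To make that concrete, here’s a hypothetical example of my own: a custom ValidationAttribute with no client-side adapter registered. jQuery Validate knows nothing about it, so it only ever runs when the model is bound on the server.

using System.ComponentModel.DataAnnotations;

// Hypothetical server-side-only rule: no client adapter is registered for it,
// so the unobtrusive JavaScript never sees it and the error only surfaces
// after the request reaches the server.
public class NotEqualToAttribute : ValidationAttribute
{
    private readonly string _disallowed;

    public NotEqualToAttribute(string disallowed)
    {
        _disallowed = disallowed;
        ErrorMessage = "The value '" + disallowed + "' is not allowed.";
    }

    public override bool IsValid(object value)
    {
        return !Equals(value as string, _disallowed);
    }
}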

Plugging into jQuery Validation is actually quite easy… To validate a form, simply select the form using jQuery and call .validate() on it – e.g.

var validator = $('.main-content form').validate();

This will return a validator object with a few handy methods on it, two of which are valid() and showErrors(). The valid method returns a Boolean value indicating whether the form is valid, and the showErrors method shows any validation errors on the current form. The showErrors method also accepts an object that defines any additional error messages you wish to display – e.g. to display the message “The title is incorrect” for a property named Title:

validator.showErrors({ Title: 'The title is incorrect.' });

Now, assuming I have a view with the following mark-up inside the form, I should see a validation error:

<div class="editor-label">@Html.LabelFor(model => model.Title)</div>
<div class="editor-field">
    @Html.TextBoxFor(model => model.Title)
    @Html.ValidationMessageFor(model => model.Title)
</div>


But how do we connect this to Web API…? Well, if you’ve read my previous post you’ll recall that calling a Web API controller’s PUT action that’s decorated with the ValidateFilter attribute I created will return a collection of validation errors if the model is not valid. To test this, I’ll modify my TodoApiController from the previous post as follows:

[ValidateFilter]
public void Put(int id, TodoItem value)
{
    if (value.Title == "hi there")
        ModelState.AddModelError("Title", "The title is incorrect.");

    if (!ModelState.IsValid) return;

    db.Entry(value).State = EntityState.Modified;
    db.SaveChanges();
}
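
If you don’t have the previous post handy, here’s a minimal sketch of what such a ValidateFilter attribute might look like. This is my reconstruction rather than the original code, so treat the names and exact response shape as assumptions; the important part is that it turns ModelState errors into a 400 response carrying an array of { key, value } pairs, which is the shape the extractErrors function below expects.

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Filters;

// Sketch only: flattens ModelState into [{ key, value }, ...] and returns it
// with a 400 Bad Request when the action leaves the model state invalid.
public class ValidateFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        var modelState = context.ActionContext.ModelState;
        if (modelState.IsValid) return;

        var errors = modelState
            .Where(kvp => kvp.Value.Errors.Count > 0)
            .Select(kvp => new
            {
                key = kvp.Key,
                value = kvp.Value.Errors.First().ErrorMessage
            })
            .ToArray();

        context.Response = context.Request.CreateResponse(HttpStatusCode.BadRequest, errors);
    }
}

With something like this in place, putting { Title: 'hi there' } produces a 400 whose body looks like [{"key":"Title","value":"The title is incorrect."}].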

I should now receive a validation error whenever I try to update an item with the title “hi there”. Let’s write some jQuery to submit my form:

function updateItem(form, url) {
    var validator = form.validate(),
        serialized = form.serializeArray(),
        data = { };

    if (!validator.valid()) { return; }

    // turn the array of form properties into a regular JavaScript object
    for (var i = 0; i < serialized.length; i++) {
        data[serialized[i].name] = serialized[i].value;
    }

    $.ajax({
        type: 'PUT', // Update Action
        url: url, // API Url e.g. http://localhost:9999/api/TodoApi/1
        data: data, // e.g. { TodoItemId: 1, Title: 'hi there', IsDone: false }
        dataType: 'json',
        success: function () { alert('success'); },
        error: function (jqXhr) { extractErrors(jqXhr, validator); }
    });

}

Now let’s look at extractErrors:

function extractErrors(jqXhr, validator) {

    var data = JSON.parse(jqXhr.responseText), // parse the response into a JavaScript object
        errors = { };

    for (var i = 0; i < data.length; i++) { // add each error to the errors object
        errors[data[i].key] = data[i].value;
    }

    validator.showErrors(errors); // show the errors using the validator object
}

Lastly, attaching to the form’s submit event will call this whenever the Enter key is hit or the Submit button is clicked:

$('.main-content form').submit(function () {
    updateItem($(this), '/api/TodoApi/' + $('#TodoItemId').val());
    return false; // stop the browser performing its normal form post – the AJAX call handles it
});

Commerce Server 2009 R2 and Visual Studio 2010

So you’re a Commerce Server developer that’s sitting on the bleeding edge…? Well, now you’ve got about the same chance of starting your site easily as you did with CS2007 and VS2008 – i.e. not much. Why, you ask…? Because the team has not updated the template used by the Project Creation Wizard add-in, so it’s just as useful as it always has been… =p

Let’s see how it works…

Using the Project Creation Wizard

Just like in the old days of CS2007 and pre-R2, hit File > New Website and you’ll get the “New Web Site” wizard. Select your language of choice (C# is the better one) and “Commerce C# ASP.NET Web Application”.


For some odd reason, they’ve decided you can’t use the file system or any non-localhost URL, so when you first create the site you need to do so under localhost.

If you’re a real developer, you’ll say yes to this too… 🙂


You’ll then have the Commerce Server Site Packager application pop up to unpack a default web site. It will ask for the site URL, but won’t ask any of the good old questions like what you want to name the application directories, meaning you’re stuck with a crappy prefix on every directory it unpacks.

Anyway, once that’s done you should have the unpacked beginnings of a web site. If you got to this point without a couple of COM errors, then congratulations. But now you’ll also notice that the project created was a “Web Site” project instead of a “Web Application” project.

So what’s next… oh yeah! Find another way to do it that actually makes sense.

Manual Site Creation

If you’ve been through the process of using the Commerce Server Site Packager application to extract a site in previous versions of Commerce Server, then you’re not going to learn much here… nothing has changed! If you haven’t, please read on.

Well, if you’ve been through the Project Creation Wizard then you have two things up your sleeve: a good web.config to start from and a csapp.ini file that points to the original PUP file used to unpack the empty website. If you haven’t, then don’t worry – I’ve uploaded the Commerce Server 2009 Starter Files to my SkyDrive, and I can tell you that the original PUP file lives at “C:\Program Files (x86)\Microsoft Commerce Server 9.0\Extensibility Kits\Samples\Pup Packages\empty.pup”.

Now you can open the Commerce Server Site Packager application manually at “C:\Program Files (x86)\Microsoft Commerce Server 9.0\PuP.exe”. It will ask whether you want to package or unpackage, but if you don’t have a site on your machine, package will be disabled and unpackage will be selected. After hitting next, select the PUP file mentioned above, select Custom Unpack and hit next again. Then select create a new site and hit next.

Now you need to type a name into the Site Name text box that does not conflict with any existing sites and click next – e.g. CommerceSample. This name is used by your web application to identify which site resources the application uses, because Commerce Server allows you to have multiple “Sites” on a machine. Then you’ll want to unpack all the resources available and click next. Click next again to create the authentication and profiling resources.

Now you will set up the database connections for each resource in the site.


Selecting a resource and clicking Modify allows you to set all the basic connection details. If you are using a remote database server, this is where you need to change the settings.


After modifying the connection strings as necessary and clicking next, you will be able to select the applications you want to unpack.


Each Site is “usually” made up of 5 applications – a Marketing, Orders, Profile and Catalog web service and a Web app. After selecting them all hit next.

You can now rename any or all of the applications, and change the web site in IIS that they will be hosted on and the virtual directory they will be under.


After a few seconds, you will be asked to provide some scripts. Click next twice to skip this.

After about a minute, you will be notified of whether the database connections were successfully set up. Click next to continue. You will then be notified of whether all the resources were extracted successfully. Click done.

Now, to get the web config into the right place, find the location of the web application that was extracted and drop in the web.config. This folder will have 3 files in it before you drop in the config file – csapp.ini, OrderObjectMappings.xml and OrderPipelineMappings.xml.

And that’s it! Well, not really… you now have extracted a Commerce Server web site, but it will not run. Now you’re in another world of pain called “Configuring a Commerce Server Site”.

Want Open Search Integration in Your Website…?

Over the past few weeks, Tatham Oddie, Damian Edwards and I have been working on publishing a framework/toolkit for integrating OpenSearch into any ASP.NET search-enabled website. I’m pleased to announce we have finally hit a release!

The project is available at opensearchtoolkit.codeplex.com. Tatham has a great post on his blog about how to integrate it into your site.

OpenSearch is a technology that already has widespread support across the web and is now getting even more relevant with Internet Explorer 8’s Visual Search feature and the Federated Search feature in the upcoming Windows 7 release.

As Tatham describes it on his blog: “Ducas Francis, one of the other members of my team, took on the job of building out our JSON feed for Firefox as well as our RSS feed for Windows 7 Federated Search. More formats, more fiddly serialization code. Following this, he started the OpenSearch Toolkit: an open source, drop-in toolkit for ASP.NET developers to use when they want to offer OpenSearch. Today marks our first release.”

So get on over to CodePlex, hit up Tatham’s blog for instructions and drop the toolkit into your web site so you can take advantage of all the coolness that is OpenSearch.

Discovering Search Terms

More trawling through old code I had written brought this one to the surface. One of the requirements of the system I’m working on was to intercept a 404 (Page Not Found) response and determine whether the referrer was a search engine (e.g. Google) in order to redirect to a search page with the search term. Intercepting the 404 was quite easily done with an HTTP module…

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using System.Web;

namespace DemoApplication
{
    public class SearchEngineRedirectModule : IHttpModule
    {
        HttpApplication _context;

        public void Dispose()
        {
            if (_context != null)
                _context.EndRequest -= new EventHandler(_context_EndRequest);
        }

        public void Init(HttpApplication context)
        {
            _context = context;
            _context.EndRequest += new EventHandler(_context_EndRequest);
        }

        void _context_EndRequest(object sender, EventArgs e)
        {
            string searchTerm = null;
            // Redirect only when the response is a 404 and the referrer
            // actually yielded a search term.
            if (HttpContext.Current.Response.StatusCode == 404
                && (searchTerm = DiscoverSearchTerm(HttpContext.Current.Request.UrlReferrer)) != null)
            {
                HttpContext.Current.Response.Redirect("~/Search.aspx?q=" + searchTerm);
            }
        }

        public string DiscoverSearchTerm(Uri url)
        {
            …
        }
    }
}

Implementing DiscoverSearchTerm isn’t that difficult either. We just have to analyse search engine statistics to see which ones are most popular and analyse the URL produced when performing a search. Luckily for us, most are quite similar in that they use a very simple format that has the search term as a parameter in the query string. The search engines I analysed included live, msn, yahoo, aol, google and ask. The search term parameter of these engines was either named “p”, “q” or “query”.

Now, all we have to do is filter for all the requests that came from a search engine, find the search term parameter and return its value…

public string DiscoverSearchTerm(Uri url)
{
    string searchTerm = null;
    // Hosts of the engines analysed above (dots escaped so they match literally).
    var engine = new Regex(@"(search\.(live|msn|yahoo|aol)\.com)|(google\.(com|ca|de|(co\.(nz|uk))))|(ask\.com)");
    if (url != null && engine.IsMatch(url.Host))
    {
        var queryString = url.Query;
        // Remove the question mark from the front and add an ampersand to the end for pattern matching.
        if (queryString.StartsWith("?")) queryString = queryString.Substring(1);
        if (!queryString.EndsWith("&")) queryString += "&";
        var queryValues = new Dictionary<string, string>();
        var r = new Regex(
            @"(?<name>[^=&]+)=(?<value>[^&]+)&",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);
        // The engines analysed name their search term parameter "q", "p" or "query".
        string[] queryParams = { "q", "p", "query" };
        foreach (Match match in r.Matches(queryString))
        {
            var param = match.Result("${name}");
            if (queryParams.Contains(param))
                queryValues[param] = match.Result("${value}");
        }
        if (queryValues.Count > 0)
            searchTerm = queryValues.Values.First();
    }
    return searchTerm;
}

The above code uses two regular expressions, one to filter for a search engine and the other to separate the query string. Once it’s decided that the URL is a search engine’s, it creates a collection of query string parameters that could be search parameters and returns the first one.
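
As a quick illustration (the referrer URL here is just an example of my own), a Google referrer would be handled like this:

var module = new SearchEngineRedirectModule();

// "www.google.com" matches the engine regex and "q" is in the parameter list,
// so the method returns the (still URL-encoded) term "commerce+server".
var term = module.DiscoverSearchTerm(
    new Uri("http://www.google.com/search?q=commerce+server&hl=en"));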

Unfortunately, there wasn’t enough time in the iteration for me to properly match each search engine with its correct query parameter, but the search term parameter most commonly comes quite early in the query string, so it’s fairly safe to assume that the first match is correct.
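
For what it’s worth, here’s a minimal sketch of that per-engine matching, assuming Yahoo takes “p”, AOL takes “query” and the rest take “q” – pairings that are my assumptions and worth verifying against real search URLs:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Web;

public static class SearchEngineTerms
{
    // Assumed pairings: each engine's host pattern mapped to the query
    // parameter that engine uses for its search term.
    private static readonly Dictionary<Regex, string> EngineParams =
        new Dictionary<Regex, string>
        {
            { new Regex(@"search\.(live|msn)\.com"), "q" },
            { new Regex(@"search\.yahoo\.com"), "p" },
            { new Regex(@"search\.aol\.com"), "query" },
            { new Regex(@"google\.(com|ca|de|co\.(nz|uk))"), "q" },
            { new Regex(@"ask\.com"), "q" }
        };

    public static string DiscoverSearchTerm(Uri url)
    {
        if (url == null) return null;
        foreach (var pair in EngineParams)
        {
            if (!pair.Key.IsMatch(url.Host)) continue;
            // Parse the query string and return that engine's specific
            // parameter, if it's present (null otherwise).
            return HttpUtility.ParseQueryString(url.Query)[pair.Value];
        }
        return null;
    }
}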