In Part 1, I walked through creating a simple form with a backing ViewModel and validation. In Part 2, I’ll walk through creating a backing Model and Edit functionality.
To start off from here, load up the code from part1: https://github.com/edandersen/mvcformtutorial/tree/part1
The final code for Part 2 will be uploaded here: https://github.com/edandersen/mvcformtutorial/tree/part2
Model Mapping flow
When editing a ViewModel, we need to prepopulate the view with real data from a domain model. This can be a database table mapped to an object via an ORM, or data from another source. In the diagram above, the GET Edit action requests the Model with an ID of 123 from the Repository holding the model data, creates a ViewModel that represents the Model, and passes it on to the View to render. POSTing the data back is similar to the Create POST method in Part 1, except that we load the existing Model from the Repository, update it with the validated data from the ViewModel, and save the updated Model back to the Repository.
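The flow above can be sketched as a pair of controller actions. This is only an outline of the pattern: the repository and ViewModel type names (IProductRepository, ProductViewModel) and their methods are placeholders for illustration, not necessarily the types used in the tutorial code.

```csharp
public ActionResult Edit(int id)
{
    var model = repository.GetById(id);          // e.g. id = 123, from the route
    var viewModel = new ProductViewModel(model); // map Model -> ViewModel
    return View(viewModel);                      // render the prepopulated form
}

[HttpPost]
public ActionResult Edit(int id, ProductViewModel viewModel)
{
    if (!ModelState.IsValid)
        return View(viewModel);         // redisplay the form with errors

    var model = repository.GetById(id); // load the existing Model
    viewModel.UpdateModel(model);       // copy validated values onto it
    repository.Update(model);           // persist the change
    return RedirectToAction("Index");
}
```

Note the POST action re-renders the form on validation failure, exactly as the Create action did in Part 1.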
I’m going to walk through the basics of Form submission with ASP.NET MVC, showing some best practices. This set of tutorials will be useful for developers moving away from ASP.NET WebForms into ASP.NET MVC or even Rails developers curious about how we do things in .NET.
You can download the code for Part 1 at: https://github.com/edandersen/mvcformtutorial/tree/part1
Form submission flow
If you have come from WebForms, you’ll be used to pulling form values out in the code behind by simply referencing variables there. These magically map to elements on the page, and most of the time you are blissfully unaware of how the data gets there. With MVC, we don’t have the same abstraction. You can access POSTed variables directly with FormCollection (or params in Rails), but with the ViewModel pattern we can replicate the binding that WebForms provides and access our form variables in a strongly typed manner.
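The difference can be sketched with two versions of a POST action. ProductViewModel is a placeholder name for whatever ViewModel your form binds to; the point is the contrast between string-keyed lookup and model binding.

```csharp
// Weakly typed: pull values out of the posted form by key,
// much like params in Rails. No compile-time checking.
[HttpPost]
public ActionResult Create(FormCollection form)
{
    var name = form["Name"]; // always a string; typos compile fine
    // ...
    return RedirectToAction("Index");
}

// ViewModel pattern: the default model binder maps posted fields
// onto properties by name, so access is strongly typed.
[HttpPost]
public ActionResult Create(ProductViewModel viewModel)
{
    if (!ModelState.IsValid)
        return View(viewModel);
    // use viewModel.Name etc. with full type safety
    return RedirectToAction("Index");
}
```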
Robots.txt is required to allow search engines to properly index your site and, more importantly, to tell them what not to index. If you have a public-facing staging or preliminary site that you don’t want to show up in Google results, you need to make sure it returns a robots.txt containing the

Disallow: /

line to prevent indexing. Manually adding robots.txt files to staging and production environments is an error-prone manual process that can be improved with the approach below: the same code can serve up a locked-down robots.txt on staging or internal URLs, and allow indexing in production.
First, add a route that responds to /robots.txt in either Global.asax.cs or RouteConfig.cs, before your Default route:

routes.MapRoute(
    name: "Robots.txt",
    url: "robots.txt",
    defaults: new { controller = "Robots", action = "RobotsText" });
You’ll also need to make sure that runAllManagedModulesForAllRequests is set to true in web.config, as requests for .txt files normally bypass the ASP.NET pipeline:
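A minimal web.config fragment that enables this is shown below; your existing &lt;modules&gt; element may already carry other attributes or child entries, in which case you only need to add the attribute.

```xml
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>
```

Be aware that this setting routes every request (including static files) through the managed pipeline, which has a small performance cost.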
Then create a new controller called “RobotsController” with a single action, “RobotsText”. All requests to /robots.txt will be routed here:

public class RobotsController : Controller
{
    public FileContentResult RobotsText()
    {
        var contentBuilder = new StringBuilder();
        contentBuilder.AppendLine("User-agent: *");
        // change this to however you want to detect a production URL
        var isProductionUrl = Request.Url != null && !Request.Url.ToString().ToLowerInvariant().Contains("elasticbeanstalk");
        contentBuilder.AppendLine(isProductionUrl ? "Disallow:" : "Disallow: /"); // block crawlers everywhere except production
        return File(Encoding.UTF8.GetBytes(contentBuilder.ToString()), "text/plain");
    }
}

You’ll need using System.Text; at the top of the file for StringBuilder and Encoding.
You can try a number of ways of detecting a production environment, from the naïve URL checking above to environment variables in your application container.
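For the environment-variable approach, a minimal sketch might look like the following. The variable name APP_ENVIRONMENT is an assumption chosen for illustration, not something defined by this post; use whatever variable your container or host actually sets.

```csharp
// Hypothetical alternative to URL sniffing: decide based on an
// environment variable set only in the production container.
// APP_ENVIRONMENT is an example name, not a standard one.
var isProductionUrl = string.Equals(
    Environment.GetEnvironmentVariable("APP_ENVIRONMENT"),
    "Production",
    StringComparison.OrdinalIgnoreCase);
```

This keeps the decision out of your code entirely, so the same build artifact behaves correctly in every environment.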