THE MICROSOFT JOURNAL FOR DEVELOPERS
APRIL 2015 VOL 30 NO 4
msdn magazine

Azure Notification and Event Hubs (pages 20, 30)

Azure Notification Hubs: Best Practices for Managing Devices
Sara Silva, page 20
Event Hubs for Analytics and Visualization
Bruno Terkaly, page 30
Automate Creating, Developing and Deploying Azure Websites
Gyan Jadal, page 34
Visualize Streaming Data the Easy Way with OData
Louis Ross, page 46
2D Drawing Techniques and Libraries for Web Games
Michael Oneppo, page 56
Authoring Desired State Configuration Custom Resources
Ritesh Modi, page 62

COLUMNS
CUTTING EDGE
Queryable Services
Dino Esposito, page 6
WINDOWS WITH C++
Visual C++ 2015 Brings Modern C++ to Legacy Code
Kenny Kerr, page 10
DATA POINTS
EF6 Code First Migrations for Multiple Models
Julie Lerman, page 16
TEST RUN
Multi-Class Logistic Regression Classification
James McCaffrey, page 70
MODERN APPS
A Mobile-First Approach to Modern App Development
Rachel Appel, page 76
DON'T GET ME STARTED
Siri and Cortana
David Platt, page 80
MSDN Magazine, April 2015, Volume 30, Number 4
Director: Keith Boyd
Editorial Director: Mohammad Al-Sabt ([email protected])
Site Manager: Kent Sharkey
Editorial Director, Enterprise Computing Group: Scott Bekker
Editor in Chief: Michael Desmond
Features Editors: Lafe Low, Sharon Terdeman
Group Managing Editor: Wendy Hernandez
Senior Contributing Editor: Dr. James McCaffrey
Contributing Editors: Rachel Appel, Dino Esposito, Kenny Kerr, Julie Lerman, Ted Neward, David S. Platt, Bruno Terkaly, Ricardo Villalobos
Vice President, Art and Brand Design: Scott Shultz
Art Director: Joshua Gould
Editor's Note
MICHAEL DESMOND
Fished Out

I remember one of the first IT articles I ever wrote. I was living in Chicago, writing occasional freelance articles for a weekly computing tabloid while working a temp job for a marketing outfit that shilled cigarettes in bars. Good times.

The article was about a floppy disk-borne virus outbreak, which in itself was hardly news. Most malware at the time got onto PCs via the 5.25- or 3.5-inch floppy disks used to move data and install applications. What was newsworthy was that the virus showed up on new diskettes sold by a local electronics store.
A quarter-century ago, malware-infected media was a byproduct of regrettable negligence by a manufacturer. Today, manufacturers infect hard drives for fun and profit.

Take the (rather egregious) example of computer maker Lenovo. The company had been outfitting its consumer line of Yoga laptops and convertibles with a pernicious bit of software called Superfish, which Lenovo has described as "visual discovery software," whatever the heck that's supposed to mean. In point of fact, Superfish was a man-in-the-middle (MITM) exploit used to monitor Lenovo users' Web searches and inject its own ads into the results as they were returned by the remote server.

But with many searches now conducted over secure HTTPS links, Superfish had a problem. It needed a way into those encrypted conversations.
Which is when things went seriously off the rails at Lenovo. As part of the bundling deal, Lenovo pre-installed into the Windows trusted root store of each of its systems a self-signed, private root certificate from Superfish, essentially imbuing Superfish with all the powers of a certificate authority. The SSL certificates that Superfish presented to intercept traffic were chained to this root certificate, causing the browser to fully trust the certificates Superfish presented.

As Rick Andrews, senior technical director for Trust Services at Symantec, wrote in a blog soon after the Superfish deal blew up: "Pre-installing any root that does not belong to an audited Certificate Authority and marking it as trusted undermines the trust model created and maintained by platform vendors, browser vendors and Certificate Authorities." It sure does.

Worse, the associated private key set up on each computer was encrypted using the same dead-simple password—komodia—which is literally the name of the company that provided the ad injection software for Superfish. I am not making this up. Robert Graham in his Errata Security blog (bit.ly/18mAiO0) describes how he was able to quickly extract the Superfish certificate and crack the private key password. Keep in mind, that password is the same for all affected Lenovo systems. Any PC with the Superfish certificate installed is vulnerable to having its Web communications intercepted.

Microsoft stepped in quickly, updating its Defender and Security Essentials tools to spot and remove Superfish installations, and Lenovo came to its senses a bit later with a removal tool of its own.

None of which resolves the truly galling part of this episode. According to a Forbes report (onforb.es/1ErfZeR), Lenovo probably earned between $200,000 and $250,000 in the bundling deal—barely a rounding error on Lenovo's $14.1 billion in revenue in the third quarter. Yet that was enough to motivate a first-line computer manufacturer to completely undermine the root-level security of its paying customers. OK, so forget the profit. Maybe manufacturers are infecting hard drives just for the fun of it at this point.
Cutting Edge
DINO ESPOSITO
Queryable Services

It's getting more common for companies to expose their back-end business services as plain old HTTP endpoints. This type of architecture requires no information about databases and physical data models. Client applications don't even need references to database-specific libraries such as Entity Framework. The physical location of the services is irrelevant and you can keep the back end on-premises or transparently move it to the cloud.

As a solution architect or lead developer, there are two scenarios you should be prepared to face when adopting this strategy. The first is when you have no access whatsoever to the internal workings of the services. In that case, you aren't even in a condition of asking for more or less data to fine-tune the performance of your client application. The second scenario is when you're also responsible for maintaining those back-end services and can influence the public API to a certain extent. In this article, I'll focus primarily on the latter scenario. I'll discuss the role of specific technologies to implement queryable services in a flexible way. The technology I'll use is OData services on top of ASP.NET Web API. Nearly everything discussed in this article applies to existing ASP.NET platforms and ASP.NET 5 vNext, as well.

Sealed Back-End Services

Before getting into queryable service design, I'll briefly explore that first scenario in which you have no control over available services. You're given all the details you need to make calls to those services, but have no way to modify the amount and shape of the response. Such sealed services are sealed for a reason. They're part of the official IT back end of your company. Those services are part of the overall architecture and aren't going to change lightheartedly. As more client applications depend on those services, chances are your company is considering versioning. Generally speaking, though, there has to be a compelling reason before any new release of those services is implemented.

If the sealed API is a problem for the client application you're developing, the only thing you can do is wrap the original services in an additional proxy layer. Then you can use any tricks that serve your purposes, including caching, new data aggregates and inserting additional data. From an architectural standpoint, the resulting set of services then shifts from the infrastructure layer up to the domain services layer. It may even be higher at the application layer (see Figure 1).

Figure 1 From Sealed Services to More Flexible Application Services
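To make the idea of a proxy layer concrete, here's a minimal sketch of what such a wrapper could look like. It isn't code from the article; the endpoint URL, the CachedOrder type and the five-minute cache policy are assumptions made purely for illustration. The wrapper calls the sealed HTTP service as documented and layers caching and reshaping on top of it:

  using System;
  using System.Net.Http;
  using System.Runtime.Caching;
  using System.Threading.Tasks;
  using System.Web.Http;
  using Newtonsoft.Json;

  // Hypothetical DTO; the sealed service defines the real shape.
  public class CachedOrder
  {
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
  }

  public class OrdersProxyController : ApiController
  {
    static readonly HttpClient Client = new HttpClient();
    static readonly MemoryCache Cache = MemoryCache.Default;

    // GET api/ordersproxy/5
    public async Task<CachedOrder> Get(int id)
    {
      // Serve from a short-lived cache to spare the sealed back end.
      var key = "order-" + id;
      var cached = Cache.Get(key) as CachedOrder;
      if (cached != null)
        return cached;

      // Call the sealed service exactly as documented; its response can't be reshaped at the source.
      var json = await Client.GetStringAsync("http://backend.example.com/api/orders/" + id);
      var order = JsonConvert.DeserializeObject<CachedOrder>(json);

      // This is the place to merge in additional data or build new aggregates.
      Cache.Set(key, order, DateTimeOffset.Now.AddMinutes(5));
      return order;
    }
  }

From the client's perspective nothing changes except the base URL; from the architecture's perspective, the proxy becomes the ad hoc application-level service layer sketched in Figure 1.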
The Read Side of an API

Modern Web applications are built around an internal API. In some cases, this API becomes public. It's remarkable to consider that ASP.NET 5 vNext pushes an architecture in which ASP.NET MVC and the Razor engine provide the necessary infrastructure for generating HTML views. ASP.NET Web API represents the ideal infrastructure for handling client requests coming from clients other than browsers and HTML pages. In other words, a new ASP.NET site is ideally devised as a thin layer of HTML around a possibly sealed set of back-end services. The team in charge of the Web application, though, is now also the owner of the back-end API, instead of the consumer. If anyone has an issue with that, you'll hear his complaints or suggestions.

Most consumer API issues concern the quantity and quality of the data returned. The query side of an API is typically the trickiest to create because you never know which way your data is being requested and used in the long run. The command side of an API is usually a lot more stable because it depends on the business domain and services. Domain services do sometimes change, but at least they maintain a different and often slower pace.

Typically, you have a model behind an API. The query side of the API tends to reflect the model whether the API has a REST or an RPC flavor. Ultimately, the sore point of a read API is the format of the data it returns and the data aggregates it supports. This issue has a friendly name—data transfer objects (DTOs). When you create an API service, you build it from an existing data model and expose it to the outside world through the same native or custom model.

For years, software architects devised applications in a bottom-up manner. They always started from the bottom of a typically relational data model. This model traveled all the way up to the presentation layer.
Depending on the needs of the various client applications, some DTO classes were created along the way to ensure that presentation could deal with the right data in the right format. This aspect of software architecture and development is changing today under the effect of the growing role of client applications. While building the command side of a back-end API is still a relatively easy job, devising a single and general enough data model that suits all possible clients is a much harder job. The flexibility of the read API is a winning factor today because you never know which client applications your API will ever face.
Figure 2 Queryable Objects Are Queried at the Last Minute
Queryable Services
In the latest edition of the book, "Microsoft .NET: Architecting Applications for the Enterprise" (Microsoft Press, 2014), Andrea Saltarello and I formalized a concept we call layered expression trees (LET). The idea behind LET is that the application layer and domain services exchange IQueryable<T> objects whenever possible. This usually happens whenever the layers live in the same process space and don't require serialization. By exchanging IQueryable, you can defer any required query results from filter composition and data projection to the last minute. You can have these adapted on a per-application basis, instead of some hardcoded form of domain-level API.

The idea of LET goes hand-in-hand with the emerging CQRS pattern. This pushes the separation of the read stack from the command stack. Figure 2 illustrates the points of having an LET pattern and a bunch of queryable services in your architecture.

The primary benefit of LET is you don't need any DTOs to carry data across layers. You still need to have view model classes at some point, but that's a different story. You have to deal with view model classes as long as you have a UI to fill out with data. View model classes express the desired data layout your users expect. That's the only set of DTO classes you're going to have. Everything else from the level at which you physically query for data and up is served through IQueryable references.

Another benefit of LET and queryable services is the resulting queries are application-level queries. Their logic closely follows domain expert language. This makes it easier to map requirements to code and discuss alleged bugs or misunderstandings with customers. Most of the time, a quick look at the code helps you explain the logic. As an example, here's what an LET query may look like:

  var model = from i in db.Invoices
                .ForBusinessUnit(buId)
                .UnpaidInLast(30.Days)
              orderby i.PaymentDueDate
              select new UnpaidViewModel { ... };
From the database context of an Entity Framework root object, you query for all inbound invoices and select those relevant to a given business unit. Among those, you find those still unpaid a number of days past due terms. The nice thing is IQueryable references are not real data. The query executes against the data source only when you exchange IQueryable<T> for some IList<T>. Filters you add along the way are simply WHERE clauses added to the actual query being run at some point. As long as you're in the same process space, the amount of data transferred and held in memory is the bare minimum.

How does this affect scalability? The emerging trend for which the vNext platform has been optimized is you keep your Web back end as compact as possible. It will ideally be a single tier. You achieve scalability by replicating the unique tier through a variety of Microsoft Azure Web Roles. Having a single tier for the Web back end lets you use IQueryable everywhere and save yourself a bunch of DTO classes.
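The ForBusinessUnit and UnpaidInLast filters in the sample query are simply composable extension methods over IQueryable<Invoice>. The column doesn't show their bodies, so the following is only a plausible sketch; the Invoice properties (BusinessUnitId, IsPaid, PaymentDueDate), the exact predicate and the Days helper are all assumptions made for illustration:

  using System;
  using System.Linq;

  public static class InvoiceQueryExtensions
  {
    // Each filter only adds a WHERE clause to the expression tree;
    // nothing runs until the query is materialized (for example, with ToList).
    public static IQueryable<Invoice> ForBusinessUnit(
      this IQueryable<Invoice> invoices, int businessUnitId)
    {
      return invoices.Where(i => i.BusinessUnitId == businessUnitId);
    }

    public static IQueryable<Invoice> UnpaidInLast(
      this IQueryable<Invoice> invoices, TimeSpan period)
    {
      var threshold = DateTime.UtcNow - period;
      return invoices.Where(i => !i.IsPaid && i.PaymentDueDate >= threshold);
    }
  }

  public static class TimeSpanHelpers
  {
    // Lets callers write UnpaidInLast(30.Days()); the 30.Days expression in the
    // sample query presumably relies on a similar helper.
    public static TimeSpan Days(this int value)
    {
      return TimeSpan.FromDays(value);
    }
  }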
Implement Queryable Services
In the previous code snippet, I assumed your services are implemented as a layer around some Entity Framework database context. That's just an example, though. You can also fully encapsulate the actual data provider under an ASP.NET Web API façade. That way, you have the benefit of an API that expresses the domain services capabilities and can still reach over HTTP, thus decoupling clients from a specific platform and technology. Then you can create a Web API class library and host it in some ASP.NET MVC site, Windows service or even some custom host application.

In the Web API project, you create controller classes that derive from ApiController and expose methods that return IQueryable<T>. Finally, you decorate each IQueryable method with the EnableQuery attribute. The now obsolete Queryable attribute works, as well. The key factor here is the EnableQuery attribute lets you append OData queries to the requested URL—something like this:

  [EnableQuery]
  public IQueryable<Customer> Get()
  {
    return (from c in db.Customers
            select c);
  }
This basic piece of code represents the beating heart of your API. It returns no data by itself. It lets clients shape up any return data they want. Look at the code in Figure 3 and consider it to be code in a client application. The $select convention in the URL determines the data projection the client receives. You can use the power of OData query syntax to shape up the query. For more on this, see bit.ly/15PVBXv.

Figure 3 Client Apps Can Shape the Data Returned

  var api = "api/customers?$select=LastName";
  var request = new HttpRequestMessage()
  {
    RequestUri = new Uri(api),
    Method = HttpMethod.Get,
  };
  var client = new HttpClient();
  var response = client.SendAsync(request).Result;
  if (response.IsSuccessStatusCode)
  {
    var list = await response.Content.ReadAsAsync();
    // Build view model object here
  }

For example, one client may only request a small subset of columns. Another client or a different screen in the same client may query a larger chunk of data. All this can happen without touching the API and creating tons of DTOs along the way. All you need is an OData queryable Web API service and the final view model classes to be consumed. Transferred data is kept to a minimum as only filtered fields are returned.

There are a couple of remarkable aspects to this. First, OData is a verbose protocol and couldn't otherwise be given the role it plays. This means when you apply a $select projection, the JSON payload will still list all fields in the original IQueryable (all fields of the original Customer class in the Get method). However, only the specified fields will hold a value. Another point to consider is case sensitivity. This affects the $filter query element you use to add a WHERE clause to the query. You might want to call tolower or toupper OData functions (if supported by the OData client library you're using) to normalize comparisons between strings.
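For instance, a client that needs a case-insensitive match on the last name could issue a URL like the first one below; whether tolower is honored depends on the OData version and configuration of the service, so treat these as hypothetical examples rather than guaranteed behavior:

  api/customers?$filter=tolower(LastName) eq 'smith'
  api/customers?$select=LastName&$orderby=LastName&$top=20

Both requests hit the same Get method shown earlier; all of the shaping happens in the query string.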
Wrapping Up
Quite frankly, I never felt OData was worthy of serious consideration until I found myself—as the owner of a back-end API—in the middle of a storm of requests for different DTOs being returned from the same data model. Every request seemed legitimate and placed for a very noble cause—performance improvement. At some point, it seemed all those clients wanted to do was "query" the back end in much the same way they would query a regular database table. Then I updated the back-end service to expose OData endpoints, thus giving each client the flexibility to download only the fields in which they were interested.

The type T of each IQueryable<T> method is the key. It may or may not be the same type T you have in the physical model. It can match a plain database table or result from data aggregations done on the server-side and transparent to clients. When you apply OData, though, you let clients query on datasets of a known single entity, T, so why not give it a try?

Dino Esposito is the coauthor of "Microsoft .NET: Architecting Applications for
the Enterprise” (Microsoft Press, 2014) and “Programming ASP.NET MVC 5” (Microsoft Press, 2014). A technical evangelist for the Microsoft .NET Framework and Android platforms at JetBrains and a frequent speaker at industry events worldwide, Esposito shares his vision of software at software2cents.wordpress.com and on Twitter at twitter.com/despos.
thanks to the following technical expert for reviewing this article: Jon Arne Saeteras
Windows with C++
KENNY KERR
Visual C++ 2015 Brings Modern C++ to Legacy Code

Systems programming with Windows relies heavily on opaque handles that represent objects hidden behind C-style APIs. Unless you're programming at a fairly high level, chances are you'll be in the business of managing handles of various kinds. The concept of a handle exists in many libraries and platforms and is certainly not unique to the Windows OS. I first wrote about a smart handle class template back in 2011 (msdn.microsoft.com/magazine/hh288076) when Visual C++ started introducing some initial C++11 language features. Visual C++ 2010 made it possible to write convenient and semantically correct handle wrappers, but its support for C++11 was minimal and a lot of effort was still required to write such a class correctly. With the introduction of Visual C++ 2015 this year, I thought I'd revisit this topic and share some more ideas about how to use modern C++ to liven up some old C-style libraries.

The best libraries don't allocate any resources and, thus, require minimal wrapping. My favorite example is the Windows slim reader/writer (SRW) lock. Here's all it takes to create and initialize an SRW lock that's ready to use:

  SRWLOCK lock = {};
The SRW lock structure contains just a single void * pointer and there's nothing to clean up! It must be initialized prior to use and the only limitation is that it can't be moved or copied. Obviously, any modernization has more to do with exception safety while the lock is held, rather than resource management. Still, modern C++ can help to ensure these simple requirements are met. First, I can use the ability to initialize non-static data members where they're declared to prepare the SRW lock for use:

  class Lock
  {
    SRWLOCK m_lock = {};
  };
That takes care of initialization, but the lock can still be copied and moved. For that I need to delete the default copy constructor and copy assignment operator:

  class Lock
  {
    SRWLOCK m_lock = {};
  public:
    Lock(Lock const &) = delete;
    Lock & operator=(Lock const &) = delete;
  };
This prevents both copies and moves. Declaring these in the public part of the class tends to produce better compiler error messages. Of course, now I need to provide a default constructor because one is no longer assumed:
  class Lock
  {
    SRWLOCK m_lock = {};
  public:
    Lock() noexcept = default;
    Lock(Lock const &) = delete;
    Lock & operator=(Lock const &) = delete;
  };
Despite the fact that I haven't written any code, per se—the compiler is generating everything for me—I can now create a lock quite simply:

  Lock lock;
The compiler will forbid any attempts to copy or move the lock:

  Lock lock2 = lock;            // Error: no copy!
  Lock lock3 = std::move(lock); // Error: no move!
I can then simply add methods for acquiring and releasing the lock in various ways. The SRW lock, as its name suggests, provides both shared reader and exclusive writer locking semantics. Figure 1 provides a minimal set of methods for simple exclusive locking. Check out "The Evolution of Synchronization in Windows and C++" (msdn.microsoft.com/magazine/jj721588) for more information on the magic behind this incredible little locking primitive.

All that remains is to provide a bit of exception safety around lock ownership. I certainly wouldn't want to write something like this:

  lock.Enter();
  // Protected code
  lock.Exit();
Instead, I'd like a lock guard to take care of acquiring and releasing the lock for a given scope:

  Lock lock;
  {
    LockGuard guard(lock);
    // Protected code
  }
Such a lock guard can simply hold onto a reference to the underlying lock:

  class LockGuard
  {
    Lock & m_lock;
  };
Like the Lock class itself, it's best that the guard class not allow copies or moves, either:

  class LockGuard
  {
    Lock & m_lock;
  public:
    LockGuard(LockGuard const &) = delete;
    LockGuard & operator=(LockGuard const &) = delete;
  };
All that remains is for a constructor to enter the lock and the destructor to exit the lock. Figure 2 wraps up that example.

Figure 1 A Simple and Efficient SRW Lock

  class Lock
  {
    SRWLOCK m_lock = {};
  public:
    Lock() noexcept = default;
    Lock(Lock const &) = delete;
    Lock & operator=(Lock const &) = delete;
    void Enter() noexcept
    {
      AcquireSRWLockExclusive(&m_lock);
    }
    void Exit() noexcept
    {
      ReleaseSRWLockExclusive(&m_lock);
    }
  };

Figure 2 A Simple Lock Guard

  class LockGuard
  {
    Lock & m_lock;
  public:
    LockGuard(LockGuard const &) = delete;
    LockGuard & operator=(LockGuard const &) = delete;
    explicit LockGuard(Lock & lock) noexcept :
      m_lock(lock)
    {
      m_lock.Enter();
    }
    ~LockGuard() noexcept
    {
      m_lock.Exit();
    }
  };

To be fair, the Windows SRW lock is quite a unique little gem and most libraries will require a bit of storage or some kind of resource that must be explicitly managed. I've already shown how best to manage COM interface pointers in "COM Smart Pointers Revisited" (msdn.microsoft.com/magazine/dn904668), so now I'll focus on the more general case of opaque handles. As I wrote previously, a handle class template must provide a way to parameterize not only the type of the handle, but also the way in which the handle is closed, and even what exactly represents an invalid handle. Not all libraries use a null or zero value to represent invalid handles. My original handle class template assumed the caller would provide a handle traits class that gives the necessary semantics and type information. Having written many, many traits classes in the intervening years, I came to realize that the vast majority of these follow a similar pattern. And, as any C++ developer will tell you, patterns are what templates are adept at describing. So, along with a handle class template, I now employ a handle traits class template. The handle traits class template isn't required, but it does simplify most definitions. Here's its definition:

  template <typename T>
  struct HandleTraits
  {
    using Type = T;
    static Type Invalid() noexcept
    {
      return nullptr;
    }
    // Static void Close(Type value) noexcept;
  };
Notice what the HandleTraits class template provides and what it specifically does not provide. I've written so many Invalid methods that returned nullptr values that this seemed like an obvious default. On the other hand, each concrete traits class must provide its own Close method for obvious reasons. The comment lives on merely as a pattern to follow. The type alias is likewise optional and is merely a convenience for defining my own traits classes derived from this template. Therefore, I can define a traits class for file handles returned by the Windows CreateFile function, as shown here:

  struct FileTraits
  {
    static HANDLE Invalid() noexcept
    {
      return INVALID_HANDLE_VALUE;
    }
    static void Close(HANDLE value) noexcept
    {
      VERIFY(CloseHandle(value));
    }
  };
The CreateFile function returns the INVALID_HANDLE_VALUE value if the function fails. Otherwise, the resulting handle must be closed using the CloseHandle function. This is admittedly unusual. The Windows CreateThreadpoolWork function returns a PTP_WORK handle to represent the work object. This is just an opaque pointer and a nullptr value is naturally returned on failure. As a result, a traits class for work objects can take advantage of the HandleTraits class template, which saves me a bit of typing:

  struct ThreadPoolWorkTraits : HandleTraits<PTP_WORK>
  {
    static void Close(Type value) noexcept
    {
      CloseThreadpoolWork(value);
    }
  };

So what does the actual Handle class template look like? Well, it can simply rely on the given traits class, infer the type of the handle and call the Close method, as needed. The inference takes the form of a decltype expression to determine the type of the handle:
0415msdn_KerrCPP_v4_10-14.indd 12
template class Handle { using Type = decltype(Traits::Invalid()); Type m_value;
};
This approach keeps the author of the traits class from having to include a type alias or typedef to provide the type explicitly and redundantly. Closing the handle is the first order of business and a safe Close helper method is tucked into the private part of the Handle class template: void Close() noexcept { if (*this) // operator bool { Traits::Close(m_value); } }
This Close method relies on an explicit Boolean operator to determine whether the handle needs to be closed before calling the traits class to actually perform the operation. The public explicit Boolean operator is another improvement over my handle class template from 2011 in that it can simply be implemented as an explicit conversion operator: explicit operator bool() const noexcept { return m_value != Traits::Invalid(); }
This solves all kinds of problems and is certainly a lot easier to define than traditional approaches that implement a Boolean-like operator while avoiding the dreaded implicit conversions the Windows with C++
3/11/15 8:27 AM
Untitled-3 1
3/9/15 4:52 PM
compiler would otherwise permit. Another language improvement, which I’ve already made use of in this article, is the ability to explicitly delete special members, and I’ll do that now for the copy constructor and copy assignment operator: Handle(Handle const &) = delete; Handle & operator=(Handle const &) = delete;
A default constructor can rely on the traits class to initialize the handle in a predictable manner: explicit Handle(Type value = Traits::Invalid()) noexcept : m_value(value) {}
And the destructor can simply rely on the Close helper: ~Handle() noexcept { Close(); }
So, copies aren’t permitted, but apart from the SRW lock, I can’t think of a handle resource that doesn’t permit moving its handle around in memory. The ability to move handles is tremendously convenient. Moving handles involves two individual operations that I might refer to as detach and attach, or perhaps detach and reset. Detach involves releasing the ownership of the handle to the caller: Type Detach() noexcept { Type value = m_value; m_value = Traits::Invalid(); return value; }
The handle value is returned to the caller and the handle object’s copy is invalidated to ensure that its destructor doesn’t call the Close method provided by the traits class. The complementary attach or reset operation involves closing any existing handle and then assuming ownership of a new handle value: bool Reset(Type value = Traits::Invalid()) noexcept { Close(); m_value = value; return static_cast(*this); }
The Reset method defaults to the handle’s invalid value and becomes a simple way to close a handle prematurely. It also returns the result of the explicit Boolean operator as a convenience. I found myself writing the following pattern quite regularly: work.Reset(CreateThreadpoolWork( ... )); if (work) { // Work object created successfully }
Here, I’m relying on the explicit Boolean operator to check the validity of the handle after the fact. Being able to condense this into a single expression can be quite convenient: if (work.Reset(CreateThreadpoolWork( ... ))) { // Work object created successfully }
Now that I have this handshake in place I can implement the move operations quite simply, beginning with the move constructor: Handle(Handle && other) noexcept : m_value(other.Detach()) {}
The Detach method is called on the rvalue reference and the newly constructed Handle effectively steals ownership away from the other Handle object. The move assignment operator is only slightly more complicated: 14 msdn magazine
  Handle & operator=(Handle && other) noexcept
  {
    if (this != &other)
    {
      Reset(other.Detach());
    }
    return *this;
  }
An identity check is first performed to avoid attaching a closed handle. The underlying Reset method doesn't bother to perform this kind of check as that would involve two additional branches for every move assignment. One is prudent. Two is redundant. Of course, move semantics are grand, but swap semantics are even better, especially if you'll be storing handles in standard containers:

  void Swap(Handle & other) noexcept
  {
    Type temp = m_value;
    m_value = other.m_value;
    other.m_value = temp;
  }
Naturally, a non-member and lowercase swap function is required for genericity:

  template <typename Traits>
  void swap(Handle<Traits> & left, Handle<Traits> & right) noexcept
  {
    left.Swap(right);
  }
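As a quick illustration (not from the column itself), the non-member swap and the move members are exactly what make the wrapper comfortable to use with standard containers. The ThreadPoolWorkTraits type shown earlier is assumed here:

  // Assumes <vector> and <utility> are included.
  Handle<ThreadPoolWorkTraits> first;
  Handle<ThreadPoolWorkTraits> second;
  swap(first, second);                 // Found by argument-dependent lookup

  std::vector<Handle<ThreadPoolWorkTraits>> work;
  work.emplace_back();                 // Default-constructed, invalid handle
  work.push_back(std::move(first));    // Move-only types are fine in containers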
The final touches to the Handle class template come in the form of a pair of Get and Set methods. Get is obvious:

  Type Get() const noexcept
  {
    return m_value;
  }
It simply returns the underlying handle value, which may be needed to pass to various library functions. The Set is perhaps less obvious:

  Type * Set() noexcept
  {
    ASSERT(!*this);
    return &m_value;
  }
This is an indirect set operation. The assertion underscores this fact. I have in the past called this GetAddressOf, but that name disguises or contradicts its true purpose. Such an indirect set operation is needed in cases where the library returns a handle as an out parameter. The WindowsCreateString function is just one example out of many:

  HSTRING string = nullptr;
  HRESULT hr = WindowsCreateString( ... , &string);
I could call WindowsCreateString in this way and then attach the resulting handle to a Handle object, or I can simply use the Set method to assume ownership directly:

  Handle string;
  HRESULT hr = WindowsCreateString( ... , string.Set());
That's much more reliable and clearly states the direction in which the data is flowing. The Handle class template also provides the usual comparison operators, but thanks to the language's support for explicit conversion operators, these are no longer necessary for avoiding implicit conversion. They just come in handy, but I'll leave that for you to explore. The Handle class template is just another example from Modern C++ for the Windows Runtime (moderncpp.com).

Kenny Kerr is a computer programmer based in Canada, as well as an author for Pluralsight and a Microsoft MVP. He blogs at kennykerr.ca and you can follow him on Twitter at twitter.com/kennykerr.
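To close the loop on the WindowsCreateString example above, here is a small sketch of what a complete traits class and call site could look like. The traits class name and its exact shape are assumptions made for illustration (the excerpt doesn't show them); WindowsCreateString and WindowsDeleteString, however, are real Windows APIs:

  struct StringTraits : HandleTraits<HSTRING>
  {
    static void Close(Type value) noexcept
    {
      WindowsDeleteString(value); // HSTRINGs are released this way
    }
  };

  Handle<StringTraits> text;
  HRESULT hr = WindowsCreateString(L"Hello", 5, text.Set());
  if (text)
  {
    // Use text.Get() wherever an HSTRING is expected
  }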
Data Points
JULIE LERMAN
EF6 Code First Migrations for Multiple Models

Entity Framework 6 introduced support for Code First Migrations to better handle storing data for multiple models in a single database. But the support is very specific and may not be what you imagine. In this article, you'll learn about this feature, what it does and doesn't do, and see how to use it.
Distinct Models, Distinct Tables, Distinct Data: Same Database
EF6 migrations supports migration of multiple models that are completely independent of one another. Both implementations of this feature—using a key to identify a set of migrations or using database schema to group migration histories with the tables for a single model—allow you to store separate, distinct models in the same database.
Not for Sharing Data Across Models or for Multi-Tenant Databases
It's easy to misinterpret the benefits of this feature, so I want to be clear right away on what is not supported. This new multi-model support isn't designed to replicate a single model across multiple schemas in order to create a multi-tenant database.
EF6 migrations supports migration of multiple models that are completely independent of one another. The other pattern many of us hope for when we see this feature is the ability to share a common entity (and its data) across multiple models and map the entity to a single database. That’s a very differ ent kind of problem, however, and not one that’s easily solved with Entity Framework. I used to try but gave up. I’ve written previous articles on sharing data across databases in this column (see “A Pattern for Sharing Data Across DomainDriven Design Bounded Contexts, Part 2” at bit.ly/1817XNT). I also presented a session at TechEd Europe called “Entity Framework Model Partitioning in DomainDriven Design Bounded Contexts,” which was recorded and is available at bit.ly/1AI6xPa.
Pattern One: ContextKey Is the Key
One of the new tools EF6 provides to enable this feature is the ContextKey. This is a new field in the MigrationHistory table of the database that keeps track of every migration. It's partnered with a new property of the same name in the DbMigrationsConfiguration class. By default, the ContextKey will inherit the strongly typed name of the DbMigrationsConfiguration associated with that context. As an example, here's a DbContext class that works with Doctor and Episode types:

  namespace ModelOne.Context
  {
    public class ModelOneContext : DbContext
    {
      public DbSet<Doctor> Doctors { get; set; }
      public DbSet<Episode> Episodes { get; set; }
    }
  }
As always, the default behavior of the enable-migrations command is to create a DbMigrationsConfiguration class that has the name [YourNamespace].Migrations.Configuration. When I apply a particular migration (that is, when I call Update-Database in the Visual Studio Package Manager Console), Entity Framework will not only apply the migration, it will also add a new row to the __MigrationHistory table. Here's the SQL for that action:

  INSERT [dbo].[__MigrationHistory]([MigrationId], [ContextKey], [Model], [ProductVersion])
  VALUES (N'201501131737236_InitialModelOne',
    N'ModelOne.Context.Migrations.Configuration',
    [hash of the model], N'6.1.2-31219')
Notice that the value going into the ContextKey field is ModelOne.Context.Migrations.Configuration, which is the strongly typed name of my DbMigrationsConfiguration class. You can control the ContextKey name by specifying the ContextKey property of the DbMigrationsConfiguration class in the class constructor. I'll rename it to ModelOne:

  public Configuration()
  {
    AutomaticMigrationsEnabled = false;
    ContextKey = "ModelOne";
  }
Now executing migrations will use ModelOne for the ContextKey field of the migration table. However, if you've already executed migrations with the default, this will not go well. EF will attempt to reapply all of the migrations, including those that created tables and other database objects, causing the database to throw errors because of the duplicate objects. So my advice is to change that value prior to applying any migrations, otherwise you'll
have to manually update the data in the __MigrationHistory table. I've made sure my DbContext type points to a connection string that I've named MultipleModelDb. Rather than rely on the Code First convention to locate a connection string with the same name as the context, I want to have a single connection string I can use for any model that targets this database. I did this by specifying that the context constructor inherit the DbContext overload, which takes a connection string name. Here's the constructor for ModelOneContext:
DbContext models that are stored in the database. But remember, it’s only able to work because there’s no overlap with respect to the database tables to which the two models map.
Pattern Two: Database Schemas Separate Models and Migrations
The other pattern you can use to allow migra tions to work with multiple models in a single Figure 1 Migrations and Tables database is to separate the migrations and the Grouped into Database Schemas relevant tables with database schemas. This is as close to simply targeting separate databases as you can get, without some of public ModelOneContext() the resource overhead (such as mainte : base("MultipleModelDb") { } nance and expense) you might incur with multiple databases. Both addmigration and updatedata EF6 makes it much easier to define base will be able to find the connection a database schema for a single model string, so I’m assured of migrating the cor by configuring it with a new DbModel rect database. Builder.HasSchema mapping. This will override the default schema name, which, Two Contexts, Two ContextKeys Figure 2 Placing Multiple DbContext for SQL Server, is dbo. Now that you see how the ContextKey Classes in a Single Project Remember that even if you don’t spec works, let’s add in another model with its ify the context key, a default name will be own ContextKey. I put this model in a separate project. The pattern for doing this when you have mul used. So there’s no point in removing the context key properties I tiple models in the same project is a bit different; I’ll demonstrate set to demonstrate how the HasSchema property affects migrations. I’ll set the schema for each of my two context classes in the On that further along in this article. Here’s my new model, ModelTwo: namespace ModelTwo.Context ModelCreating method. Here’s the relevant code for ModelTwo { Context, which I’ve specified to have a schema named ModelTwo: public class ModelTwoContext:DbContext {
}
}
  public DbSet<BbcEmployee> BbcEmployees { get; set; }
  public DbSet<HiringHistory> HiringHistories { get; set; }
ModelTwoContext works with completely different domain classes. Here's its DbConfiguration class, where I specified that the ContextKey be called ModelTwo:

  internal sealed class Configuration : DbMigrationsConfiguration<ModelTwoContext>
  {
    public Configuration()
    {
      AutomaticMigrationsEnabled = false;
      ContextKey = "ModelTwo";
    }
  }
When I call update-database against the project that contains ModelTwoContext, the new tables are created in the same database and a new row is added to the __MigrationHistory table. This time the ContextKey value is ModelTwo, as you can see in the snippet of SQL that was run by the migration:

  INSERT [dbo].[__MigrationHistory]([MigrationId], [ContextKey], [Model], [ProductVersion])
  VALUES (N'201501132001186_InitialModelTwo', N'ModelTwo',
    [hash of the model], N'6.1.2-31219')
As I evolve my domain, my DbContext and my database, EF Migrations will always check back to the relevant set of executed migrations in the __MigrationHistory table using the appropriate ContextKey. That way it will be able to determine what changes to make to the database given the changes I made to the model. This allows EF to correctly manage the migrations for multiple msdnmagazine.com
0415msdn_LermanDPts_v4_16-18.indd 17
protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.HasDefaultSchema("ModelTwo"); }
The other context will get the schema name ModelOne. The result is that all of the database objects to which the Model TwoContext maps will be in the ModelTwo schema. Additionally, EF will put the __MigrationHistory table for this model in the ModelTwo schema, as well. To demo this in a clean way, I’m pointing to a different database than I did in the previous examples and applying all of the migra tions. Keep in mind that setting the HasDefaultSchema method is a mapping change and requires that you add a new migration to apply that change to the database. Figure 1 shows the migration and data tables in their separate schemas. Going forward, whenever you interact with migrations for either context, because they’re relegated to their individual schemas, EF won’t have a problem maintaining them separately. As a reminder, pay attention to the critical pattern here, which is that there’s no overlap with the tables mapped to the two models.
Multiple Models in a Single Project
The two examples you’ve seen thus far—using the ContextKey or the database schema to separate the model migrations—were set up with each model encapsulated in its own project. This is the way I prefer to architect my solutions. But it’s also possible and, in many cases, completely reasonable to have your models in the same April 2015 17
3/11/15 8:28 AM
project. Whether you use the Context HasDefaultSchema mapping method in Key or the database schema to keep the my DbContext classes, so the migration migrations sorted out, you can achieve history tables and other database objects this with the addition of only a few extra will be housed in their own schemas. parameters to the NuGet commands. Now it’s time to add migrations for For clean separation of these exam both models and apply them to my data ples, I’ll create a new solution with the base. Again, I have to direct EF to which same classes. I’ll keep the domain classes model and migration to use. By pointing in separate projects, but both models in to the migration configuration file, EF will the same project, as shown in Figure 2. know which model to use and in which As you know, by default, enable directory to store the migration files. migrations will create a folder called Here’s my command for adding a Migrations for a discovered DbContext migration for ContextOne (remember in your solution. If you have multiple Db Figure 3 Result of Specifying the DbContext and that I changed the configuration class Contexts as I do now, enablemigrations Directory Name When Enabling Migrations name so I don’t have to use its fully quali will not just randomly select a DbContext fied name for ConfigurationTypeName): add-migration Initial for creating migrations; instead, it will return a very helpful -ConfigurationTypeName ModelOneDbConfig message instructing you to use the ContextTypeName parameter The resulting migration file gets created in the ModelOne to indicate which DbContext to use. The message is so nice that Migrations directory. After I do the same for ModelTwo, I also have you can just copy and paste from the message to run the necessary a migration file in the ModelTwoMigrations directory. commands. Here’s the message returned for my project: Now it’s time to apply these migrations. I’ll need to specify the PM> enable-migrations ConfigurationTypeName again so that EF knows which migration More than one context type was found in the assembly 'ModelOne.Context'. To enable migrations for 'ModelOne.Context.ModelOneContext', use to use. Here’s the command for ModelOne: Enable-Migrations -ContextTypeName ModelOne.Context.ModelOneContext. To enable migrations for 'ModelTwo.Context.ModelTwoContext', use Enable-Migrations -ContextTypeName ModelTwo.Context.ModelTwoContext.
In addition to the –ContextTypeName parameter, I’ll add in the MigrationsDirctory parameter to explicitly name the folder to make it easier for me to manage the project assets: Enable-Migrations -ContextTypeName ModelOne.Context.ModelOneContext -MigrationsDirectory ModelOneMigrations
Figure 3 shows the new folders with their Configuration classes that were created for each migration. Running EnableMigrations also adds the code to the DbCon figuration classes, which makes them aware of the directory name. Here’s the configuration class for ModelOneContext as an example (the DbConfiguraton file for ModelTwoContext will set its directory name to ModelTwoMigrations, as designated): internal sealed class Configuration : DbMigrationsConfiguration { public Configuration() { AutomaticMigrationsEnabled = false; MigrationsDirectory = @"ModelOneMigrations"; }
Because I now have two classes named Configuration, I’ll be forced to fully qualify them any time I want to use them. So I’ll rename the first to ModelOneDbConfig (as shown in the following code) and the second to ModelTwoDbConfig: internal sealed class ModelOneDbConfig : DbMigrationsConfiguration { public ModelOneDbConfig() { AutomaticMigrationsEnabled = false; MigrationsDirectory = @"ModelOneMigrations"; } }
You can also specify a ContextKey if you want to override the default, but I’ll leave that alone. Remember that I did specify the 18 msdn magazine
0415msdn_LermanDPts_v4_16-18.indd 18
update-database -ConfigurationTypeName ModelOneDbConfig
I’ll run that and then the relevant command for ModelTwo: update-database -ConfigurationTypeName ModelTwoDbConfig
After running these commands, my database looks just the same as it did in Figure 1. As I modify my models and add and apply migrations, I just need to remember to specify the correct configuration class as a parameter in each of the commands.
Nice Fit with Domain-Driven Design Modeling
In a recent twopart Data Points column called “A Pattern for Sharing Data Across DomainDriven Design Bounded Contexts,” I wrote about sharing data across domains that are persisted to separate databases. Part One is at bit.ly/1wolxz2 and Part Two is at bit.ly/1817XNT. A number of developers have pointed out that maintaining a sepa rate database onpremises can be a burden and paying for separate databases that are hosted in the cloud can be expensive. The tech niques you learned in this article for hosting the tables and data for multiple models in a single database can help you to emulate complete database separation. This new support in EF6 migrations provides a nice solution for those developers. n Julie lerman is a Microsoft MVP, .NET mentor and consultant who lives in the
hills of Vermont. You can find her presenting on data access and other .NET topics at user groups and conferences around the world. She blogs at thedatafarm.com/blog and is the author of "Programming Entity Framework" (2010), as well as a Code First edition (2011) and a DbContext edition (2012), all from O'Reilly Media. Follow her on Twitter at twitter.com/julielerman and see her Pluralsight courses at juliel.me/PS-Videos.
Thanks to the following Microsoft technical expert for reviewing this article: Rowan Miller
MICROSOFT AZURE
Azure Notification Hubs: Best Practices for Managing Devices
Sara Silva
The mobile applications market is growing faster and faster, and improving the UX of any application is crucial because it increases user loyalty. One of the most important features of modern applications is that they can be kept alive, which means keeping the user informed about the latest events that have occurred in the application, even when it's not being used. This is possible through push notifications. Each mobile platform has its own push notification service (PNS) responsible for pushing the notifications—short messages—to the device. Windows applications allow apps to receive different push notification types that represent different ways to display the message: toast, tile, raw and badge. For Android applications, on
This article discusses:
• The push notification lifecycle
• Using Notification Hubs
• The two main patterns and two models for managing devices in Notification Hubs
• Using Notification Hubs in Azure Mobile Services
Technologies discussed: Microsoft Azure, Azure Notification Hubs
Code download available at: github.com/saramgsilva/NotificationHubs
the other hand, only a key/value message is sent to the device and the layout is defined in the application by the class responsible for managing push notifications. In Apple iOS applications, the process is a mix of these approaches.
Azure Notification Hubs is a Microsoft Azure service that provides an easy-to-use infrastructure for sending push notifications from any back end to any mobile platform. The PNS delivers the notifications, though each application needs a back end or a Web or desktop application to define the message and to connect to the push notification provider to send it. Actually, there are two main patterns and two models for managing devices in Notification Hubs. In this article I'll show how each pattern should
Figure 1 Generic Templates
var toast = new XElement("toast",
  new XElement("visual",
    new XElement("binding",
      new XAttribute("template", "ToastText01"),
      new XElement("text",
        new XAttribute("id", "1"),
        "$(message)")))).ToString(SaveOptions.DisableFormatting);

var alert = new JObject(
  new JProperty("aps",
    new JObject(new JProperty("alert", "$(message)"))),
  new JProperty("inAppMessage", notificationText))
  .ToString(Newtonsoft.Json.Formatting.None);

var payload = new JObject(
  new JProperty("data",
    new JObject(new JProperty("message", "$(message)"))))
  .ToString(Newtonsoft.Json.Formatting.None);
be used; discuss the advantages, disadvantages and possible scenarios for each; and describe the different models that can be used. I’ll focus on best practices for sending cross-platform and customized push notifications using Notification Hubs, and also show how Notification Hubs is integrated into Azure Mobile Services.
The Push Notification Lifecycle
The push notification lifecycle consists of three main steps: 1. The application makes a request for the push notification service handle, which can be a token, channel URI or a registrationId, depending on the mobile platform. 2. The application sends the PNS handle to the back end to store it. 3. The back end sends a request to the PNS, which then delivers the push notification. Conceptually this process is very simple, but in practice it’s not so easy because the infrastructure required to implement this flow is complex. If an application is provided for different client platforms, it requires an implementation for each one (in reality, it will represent an implementation for each interface provided by each platform). Moreover, the format for each push notification has its own platform specifications and, in the end, these can be hard to maintain. And that’s not all: The back end needs to be able to scale some push notification services that don’t support broadcasting to multiple devices; to target push notifications to different interest groups; and monitor the push notifications for delivery status. These issues must be handled in the back end and, consequently, this requires a complex infrastructure.
Using Notification Hubs
As I noted, Azure Notification Hubs makes it easy to send mobile push notifications from any back end to any mobile platform, and it supports sending push notifications for different interest groups and providing monitoring, telemetry and scheduling of push notifications. In this way, Notification Hubs is a
third-party service that helps send cross-platform and personalized push notifications to applications and implements all the needs of a push notification infrastructure. To integrate Notification Hubs in an application requires configuration in the Azure Portal; the connection between Notification Hubs and the PNS should be configured in the Configuration tab of the respective Notification Hub. Without these configurations, push notifications can't be sent and an error will occur. (If you're ever in doubt about which push notification services are supported by Notification Hubs, the Configuration tab is the best place to check.) To use Notification Hubs, you'll need to understand tags, templates and the different ways Notification Hubs can be used.
In general, a tag represents an interest group, which allows you to send push notifications to specific targets. For example, a sports news application could define tags for each sport: cycling, football, tennis and so forth, allowing the user to select what he wants to receive based on his interests. For cases where authentication is required and the focus is the user, the user id can be used as the tag. In practice, a tag is no more than a simple string value (like "cycling," "football" or "tennis"), and it's also useful for localization (for different languages there's a pattern like "en_cycling" or "pt_cycling," which represents tags in English or Portuguese, respectively). Templates aren't a new concept, but Notification Hubs creates an abstraction that allows you to define either platform-specific templates or generic templates, which means you can specify key/value pairs for each push notification defined when the device is registered. The registration will then provide a native notification (toast, payload or message) that contains expressions (such as $(message)) with a value the back end or application will define when the push notification is sent.
Figure 2 Sending a Platform-Independent Message
Figure 1 shows examples of the generic template (for Windows, iOS and Android) using expressions. The back end or application will fill the value when it sends the key/value (Figure 2 illustrates the process): {"message", "Message here!"}
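A minimal back-end sketch of that send, using the NotificationHubClient's SendTemplateNotificationAsync method, might look like the following; the hub name, connection string and tag are placeholders rather than values from the article's sample:

  // Create the client from the DefaultFullSharedAccessSignature connection string
  var hub = NotificationHubClient.CreateClientFromConnectionString(ConnectionString, HubName);

  // The dictionary key matches the $(message) expression in the registered templates,
  // so each platform receives the value in its own native format
  var values = new Dictionary<string, string> { { "message", "Message here!" } };
  await hub.SendTemplateNotificationAsync(values, "cycling");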
Registering Devices
In mobile development using Notification Hubs, there are two patterns for managing devices and sending push notifications, which can use two different models. In general, the patterns can be described as follows: Case 1: Devices connect directly to Notification Hubs • The client application connects directly to Notification Hubs to register the device. • The back end or a Web or desktop application connects to Notification Hubs to send the push notification. • The PNS will deliver the mobile push notification. Case 2: The back end manages devices in Notification Hubs • The client application connects to the back end. • The back end connects with Notification Hubs to register the devices. • The back end defines the push notification, which will be sent by Notification Hubs. • The PNS will distribute the mobile push notification. Both patterns allow you to use either the Registration model or the Installation model; these models describe the way a device sends the required information to the notification hub. The Installation model was introduced recently and is recommended for new applications or even for current applications. This new model doesn’t make the Registration model obsolete. Here’s a general description of each model: Registration model: In this model, the application sends a registration request to the Notification Hub, providing the PNS handler and tags, and the hub returns a registration id. For each registration you can choose a native template or a generic template, which will define the message.
Installation model: In this model, the application sends an installation request to the Notification Hub, providing all information required for the process: InstallationId (for example, a GUID), tags, PNS handler, templates and secondary templates (for Windows apps).
Although both models are currently available for use, the Installation model was introduced for certain technical reasons, including:
• The installation model is easier to implement and maintain.
• The installation model allows partial updates to modify, add or remove tags, PNS handlers, templates and so forth, without sending the entire Installation object; in contrast, the registration model requires the entire registration object.
• The registration model introduces the possibility of duplicate registrations in the back end for the same device.
• The registration model creates complexities in maintaining the list of registrations.
No matter which pattern you use, the flow will be the same.
The Installation model can be used in any .NET or Java back end with the Notification Hubs .NET and Java SDKs. For client applications, you can use the REST API with this model until the new NuGet package that will support the installation model is released. You'll find the current NuGet package with support for the registration model, WindowsAzure.Messaging.Managed, available at bit.ly/1ArIiIK. Now, let's take a look at the two patterns.
Case 1: Devices Connect Directly to Notification Hubs
I’m going to discuss what happens when devices connect directly to Notification Hubs as this is the pattern most frequently used by developers. To describe this, I’ll use the registration model. Registering and unregistering a device in Notification Hubs: The device requests a PNS handle from the PNS, then it connects to a Notification Hub to register, using the PNS handle. The Notification Hub uses this value in the PNS connection. In practice, for example with Universal Applications, the application starts by requesting the channel from the Windows push notification service (WNS) with the following code: // Get the channel from the application var pushNotificationChannel = await PushNotificationChannelManager. CreatePushNotificationChannelForApplicationAsync();
For Android, Google Cloud Messaging provides a registrationId, and for iOS, the Apple Push Notification Service provides a token. A Notification Hub object should be created using the hub name, defined in the Azure Portal, and the connection string (more specifically, the DefaultListenSharedAccessSignature key): // Create the notification hub object var hub = new NotificationHub(HubName, ConnectionString);
Note that the NotificationHub class is provided by the WindowsAzure.Messaging.Managed NuGet package (bit.ly/1ArIiIK).
Figure 3 Defining the Native Push Notification
// Define template for Windows
var toast = new XElement("toast",
  new XElement("visual",
    new XElement("binding",
      new XAttribute("template", "ToastText01"),
      new XElement("text",
        new XAttribute("id", "1"),
        notificationText)))).ToString(SaveOptions.DisableFormatting);

// Define template for iOS
var alert = new JObject(
  new JProperty("aps",
    new JObject(new JProperty("alert", notificationText))),
  new JProperty("inAppMessage", notificationText))
  .ToString(Newtonsoft.Json.Formatting.None);

// Define template for Android
var payload = new JObject(
  new JProperty("data",
    new JObject(new JProperty("message", notificationText))))
  .ToString(Newtonsoft.Json.Formatting.None);
Now the application can be registered with the Notification Hub: // Register the device in Notification Hubs var result = await hub.RegisterNativeAsync(pushNotificationChannel.Uri, Tags);
When you register a device in Notification Hubs, you don’t have to provide a list of tags, but if the application needs to define interest groups, those tags should be stored in the device to use them in a registration update. Note that the result.RegistrationId received is the id of the registration in Notification Hubs and should not be confused with the registrationId used in Android applications (the id from Google Cloud Messaging). Sending the push notification: The back end (or an application) connects to the Notification Hub to send the push notification, which can be sent to specific tags (or not) using a specific or generic template. In this case, it’s not important who will connect to the Notification Hub, the back end or a Web or desktop application to send the notification. In practice, a NotificationHubClient object should be created and it will require the hub name, defined in the Azure Portal, and the connection string (more specifically, the DefaultFullSharedAccessSignature key): // Create the Notification Hub client object var hub = NotificationHubClient.CreateClientFromConnectionString( HubName, ConnectionString);
Next, the native push notification must be defined, for example, as shown in Figure 3. And then the push notification can be sent:
var googleResult = await hub.SendGcmNativeNotificationAsync(payload, tags);
var windowsResult = await hub.SendWindowsNativeNotificationAsync(toast, tags);
var appleResult = await hub.SendAppleNativeNotificationAsync(alert, tags);
Whenever a mobile push notification is sent, a list of tags can be provided, but this is optional (it depends on the application requirements). Distributing the push notification: To finish the process, the PNS delivers the notification to devices. The service will try to push the notification during a limited period of time, which varies for each service. One of the advantages of this scenario is that it doesn't require a back end, but there's a disadvantage when a user uses more than one
device: The tags aren’t shared among devices and the user needs to redefine the tags for each one. The mobile application depends on the Notification Hub, and it’s needed each time to update the tags in the application, which can be a problem if the user doesn’t update to the latest version (a common problem with mobile applications). Possible scenarios for this case can include: • The mobile application doesn’t require a back end, such as when push notifications are sent by a desktop application with this capacity or by the back office (the admin Web site). For example, consider an application based on an online tech event’s news feed that has a settings page where users can subscribe to their interests, such as Developer Events, IT Pro Events and so on. Once the interests are selected, the user will receive push notifications for those interests. Instead of requiring a back end to trigger push notifications, a desktop application or back office can talk directly to the Notification Hub whenever a new event is created to trigger push notifications to all devices that have subscribed to the specific interest. • The mobile application uses a back end, but the push notifications, for some reason, are not integrated in the services and are sent by a desktop application or by the back office. An example might be a client that has a service that provides information to show in the application, but doesn’t allow changes to the service (which can be used by different applications). However, to support push notifications, the client can send notifications using the back office, or even a desktop application.
Case 2: The Back End Manages Devices in Notification Hubs
To describe this pattern I’ll use the new approach based on the installation model, introduced recently by the Notification Hubs team. The application connects to the back end: In this case, the application connects to the back end to create or update the installation object, which will be stored locally. The device requests the PNS handle from the PNS and gets the last Installation from the Notification Hub, stored on the device. The device then connects to the back end to create or update the installation, from the device, in the Notification Hub. (A new Installation object will be created the first time, but it should be Figure 4 Defining a WindowsPushMessage Based on the ToastText01 Template public static IPushMessage GetWindowsPushMessageForToastText01(string message) { var payload = new XElement("toast", new XElement("visual", new XElement("binding", new XAttribute("template", "ToastText01"), new XElement("text", new XAttribute("id", "1"), message)))) .ToString(SaveOptions.DisableFormatting);
}
return new WindowsPushMessage { XmlPayload = payload };
reused each time the back end connects with the Notification Hub for the same device.) For example, in Universal Applications, the application will start by requesting the channel URI from the WNS, with the following: // Get the channel from the application var pushNotificationChannel = await PushNotificationChannelManager. CreatePushNotificationChannelForApplicationAsync();
Before making the request to the back end to create or update the installation, the Installation object must be defined:
// Retrieve installation from local storage and
// create new one if it does not exist
var installation = SettingsHelper.Installation;
if (installation == null)
{
  installation = new Installation
  {
    InstallationId = Guid.NewGuid().ToString(),
    Platform = NotificationPlatform.Wns,
    PushChannel = pushNotificationChannel.ToString(),
    Tags = new List<string> { "news", "sports" }
  };
}
The installation class used in the client application is created from the JSON provided in the documentation. At this point, you can request the registration with code like this: // Create a client to send the HTTP registration request var client = new HttpClient(); var request = new HttpRequestMessage(HttpMethod.Post, new Uri(_registerUri)) { Content = new StringContent(JsonConvert.SerializeObject(installation), Encoding.UTF8, "application/json") }; var response = await client.SendAsync(request);
The back end creates or updates the installation from the device, in the Notification Hub: When the back end receives a request to register the device using the installation model, it connects with the Notification Hub to create or update the installation. For example, in ASP.NET Web API, you can create a NotificationHubController to define the services to manage devices in Notification Hubs. The implementation requires a Notification Hub client object, which can be defined in the constructor, as follows:
public class NotificationHubController : ApiController
{
  private readonly NotificationHubClient _notificationHubClient;
  private const string HubName = "";
  private const string ConnectionString = "";

  public NotificationHubController()
  {
    _notificationHubClient = NotificationHubClient.CreateClientFromConnectionString(
      ConnectionString, HubName);
  }
Now you can write the method that will create or update the installation, which will receive as input the installation object, like so:
[HttpPost]
[Route("api/NH/CreateOrUpdate")]
public async Task CreateOrUpdateAsync(Installation installation)
{
  // Before creating or updating the installation, it's possible
  // to change the tags, for example to enforce secure tags
  await _notificationHubClient.CreateOrUpdateInstallationAsync(installation);
}
Note that the tags can be defined and stored in the back end (in both models). The mobile application isn't required to store them or even know about them. For example, consider a bank application that defines tags for each account for a client. When an operation is done for an account, a push notification is sent to the
device for that account. In this case, the tags must be secure and only the back end will know them. In the Installation model, the installation will be stored in the Notification Hub, which means it can be retrieved based on the InstallationId. The back end defines the push notification, which will be sent by the Notification Hub: The back end is responsible for sending the push notification to the Notification Hub, and you can provide tags and define the template for each platform (when a generic template isn’t defined). The implementation is similar to the scenario in Case 1, where the back end or an application connects to the Notification Hub to send the push notification, so I won’t provide any additional code for this case.
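To make the secure-tag idea concrete, here's a minimal back-end sketch; GetAccountIdForCurrentUser is a hypothetical helper, not part of the article's sample, standing in for whatever lookup your authentication scheme provides:

  // Ignore any client-supplied tags and apply server-derived, secure tags instead
  installation.Tags = new List<string> { "account:" + GetAccountIdForCurrentUser() };
  await _notificationHubClient.CreateOrUpdateInstallationAsync(installation);

Because only the back end knows how the account tag is composed, a device can never register itself for another user's notifications.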
By the way, in the installation model, it's possible to send a push notification to a specific installation. This is a new feature only for registrations based on the Installation model. Here's the code to do that:
_notificationHubClient.SendWindowsNativeNotificationAsync(
  payload, "$InstallationId:{" + installationId + "}");
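Retrieving a stored installation from the hub is equally direct. As a minimal sketch using the same client object (installationId being the value the device generated earlier):

  // Look up the installation that was stored for this device
  var installation = await _notificationHubClient.GetInstallationAsync(installationId);
  // Its Tags, Templates and PushChannel can then be inspected or adjusted on the back end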
The push notification service will distribute the push notification: To finish the process, the PNS will deliver the push notifications to the devices, during a limited period. One of the advantages of this case is that tags can be static or dynamic, which means they can change anytime without changing or affecting the mobile applications; tags are secure because each user can only be registered for a tag if he is authenticated; tags can be shared among different devices for the same user; the mobile application is completely independent of Notification Hubs. The disadvantage of this case is the fact the process is more complex than the first case, if the registration model is used. Possible scenarios for this include: • A mobile application that connects to a back end and the tags must be secure. A good example for this case is an application related to bank accounts, which supports push notifications to keep users informed about transactions in their accounts. • A mobile application that connects to a back end and tags must be dynamic. Consider an application that provides information for different music events, using tags to define different events, and for each event users can subscribe to be updated about all related information. The lifecycle for each tag is short and each time a new event is created, a new tag is created for that event. Tags, therefore, are dynamic and can be managed by the back end, keeping the process independent of the mobile application. In both scenarios, tags should be stored in the back end, but that doesn't mean the mobile client isn't aware of them.
Updating Registrations
With the registration model, each time you want to update anything related to the registration, such as the tags being used, you must redo the registration; this is required to complete the update. But in the Installation model, you can update just specific details. This means if the application needs to update one tag, the templates or other details, it's possible to do a partial update. Here's the code for updating a tag (note that the list of operations needs to be declared before the operation is added to it):
// Define the partial update
var partialUpdates = new List<PartialUpdateOperation>();
PartialUpdateOperation replaceTagsOperation = new PartialUpdateOperation();
replaceTagsOperation.Operation = UpdateOperationType.Replace;
replaceTagsOperation.Path = "/tags/tennis";
replaceTagsOperation.Value = "cycling";
partialUpdates.Add(replaceTagsOperation);

// Send the partial update
_notificationHubClient.PatchInstallation(installationId, partialUpdates);
Using Notification Hubs in Azure Mobile Services
Azure Mobile Services allows you to develop an application with a scalable and secure back end (using the Microsoft .NET Framework or Node.js) hosted in Azure. With the focus on mobile applications, Azure Mobile Services provides the main features a mobile application needs, such as CRUD operations, social network authentication, offline support, push notifications and more. Push notifications are provided in Azure Mobile Services by Notification Hubs, and each time an Azure Mobile Service is created, a Notification Hub is created and associated with the Azure Mobile Service.
Mobile applications that use Azure Mobile Services can implement push notifications in a quick and easy way, because the Azure Mobile Services SDK provides APIs for:
• Client applications to register devices using the back end.
• A back end to manage the devices in a Notification Hub; to modify requests from devices, which means the back end can add, modify or delete tags from the request made by mobile applications, or even cancel the registration; and to send push notifications (using a specific template and, if tags are required, providing them).
The implementation in the .NET back end will be something similar to:
await Services.Push.SendAsync(
  GetWindowsPushMessageForToastText01("My push notification message"), tags);
The method GetWindowsPushMessageForToastText01 defines the template for the push notification based on a message, and can be implemented as shown in Figure 4. In the client application, for example in Universal Apps, you must request the channel from WNS: // Get the channel var channel = await PushNotificationChannelManager. CreatePushNotificationChannelForApplicationAsync();
Then the mobile service client should be created, and it will allow interaction with the back end: // Create the MobileServiceClient using the // AzureEndpoint and admin key var mobileServiceClient = new Microsoft.WindowsAzure.MobileServices.MobileServiceClient( AzureEndPoint, AzureMobileServiceApplicationKey);
The AzureEndPoint should be something like https://mymobileservice.azure-mobile.net/ and the AzureMobileServiceApplicationKey is a string that represents the application key defined in the Azure Portal. You can use the method RegisterNativeAsync to register a device, as follows: // Register the device await MobileServiceClient.Client.GetPush().RegisterNativeAsync( channel.Uri, tags);
But if you’re going to register the device using a generic template, use the RegisterTemplateAsync method. In Azure Mobile Services (.NET back end), to extend, modify or cancel the registration for Azure Notification Hubs, the INotificationHandler interface should be implemented. By doing so, you can, for example, have secure tags.
Wrapping Up
Notification Hubs provides an abstraction for sending push notifications from any back end to any mobile platform, and the new model allows you to define a unique installation that contains the essential information required by Notification Hubs. That installation can be reused, avoiding duplicate registrations and complexity; for new applications, or even for current applications, the installation model is recommended. There are two main patterns for registering devices with Notification Hubs, which should be chosen according to the application requirements. This model lets you send messages focused on specific interest groups using tags, and it's also possible to define generic or platform-specific templates and to scale and monitor the service according to the needs of the application. Finally, easy-to-use SDKs are provided in several languages for both the mobile apps and the back end. In Azure Mobile Services, Notification Hubs is integrated by default, which is a plus for developers because it enables implementation of the push notification feature in a quick and easy way.
Sara Silva is a mathematics graduate and Microsoft MVP. Nowadays, she works as a mobile developer in Portugal, with a main focus on Windows applications, Xamarin and Microsoft Azure. She blogs at saramgsilva.com and can be followed on Twitter at twitter.com/saramgsilva.
Thanks to the following Microsoft technical experts for reviewing this article: Piyush Joshi, Paulo Morgado and Chris Risner
AZURE INSIDER
Event Hubs for Analytics and Visualization
Bruno Terkaly
There are countless examples of companies that are building solutions based on analytics and visualization. There are many scenarios requiring large-scale data ingestion and analytics. Nobody can deny the huge growth in the social networking space, where tweets from Twitter, posts from Facebook and Web content from blogs are analyzed in real time and used to provide a company's brand awareness. This is the first part of a multi-part series around real-time analytics and visualization, focusing on hyper-scale data flows. This level of data analysis is big business for companies such as Networked Insights, which helps brands make faster, smarter and more audience-centric decisions. Its marketing solution analyzes and organizes real-time consumer data from the social Web to produce strategic, actionable insights that inform better audience segmentation, content strategy, media investment and
This article discusses:
• Data analysis and visualization techniques
• Using the AMQP transmission protocol
• Configuring Azure Event Hubs
Technologies discussed: Microsoft Azure, Azure Event Hubs, Visual Studio 2013, Ubuntu Linux 14.02
Code download available at: github.com/brunoterkaly/msdneventhubs
brand health. What differentiates its approach is the company's ability to simplify the use of social media data by proactively classifying it across 15,000 consumer-interest dimensions in real time, leveraging computational linguistics, machine learning and other techniques. Some of those consumer-interest dimensions include the use of 46 emotional classifiers that go deeper than the roots of positive and negative sentiment and into the nuances of hate, love, desire and fear so marketers understand not only how consumers feel about a product, but what to do next. For this level of analysis, the first thing you need is large-scale data ingestion. Using the social media example again, consider that Twitter has hundreds of millions of active users and hundreds of millions of tweets per day. Facebook has an active user base of 890 million. Building the capability to ingest this amount of data in real time is a daunting task. Once the data is ingested, there's still more work to do. You need to have the data parsed and placed in a permanent data store. In many scenarios, the incoming data flows through a Complex Event Processing (CEP) system, such as Azure Stream Analytics or Storm. That type of system does stream analytics on the incoming data by running standing queries. This might generate alerts or transform the data to a different format. You may then opt to have the data (or part of it) persisted to a permanent store or have it discarded. That permanent data store is often a JSON-based document database or even a relational database like SQL Server. Prior to being placed into permanent storage, data is often aggregated,
coalesced or tagged with additional attributes. In more sophisticated scenarios, the data might be processed by machine learning algorithms to facilitate making predictions. Ultimately, the data must be visualized so users can get a deeper understanding of context and meaning. Visualization often takes place on a Web-based dashboard or in a mobile application. This typically introduces the need for a middle tier or Web service, which can expose data from permanent storage and make it available to these dashboards and mobile applications. This article, and subsequent installments, will take a more straightforward example. Imagine you have thousands of devices in the field measuring rainfall in cities across the United States. The goal is to visualize rainfall data from a mobile device. Figure 1 demonstrates some of the technologies that might bring this solution to life. There are several components to this architecture. First is the event producer, which could be the Raspberry Pi devices with attached rainfall sensors. These devices can use a lightweight protocol such as Advanced Message Queuing Protocol (AMQP) to send high volumes of rainfall data into Microsoft Azure Event Hubs. Once Azure has ingested that data, the next tier in the architecture involves reading the events from Azure Event Hubs and persisting the events to a permanent data store, such as SQL Server or DocumentDB (or some other JSON-based store). It might also mean aggregating events or messages. Another layer in the architecture typically involves a Web tier, which makes permanently stored data available to Web-based clients, such as a mobile device. The mobile application or even a Web dashboard then provides the data visualization. This article will focus on the first component.
Event Producer
Because I’m sure most of you typically don’t own thousands of Internet of Things (IoT) devices, I’ve made some simplifications to the example relating to the event producer in Figure 1. In this
article, I’ll simulate the Raspberry Pi devices with Linux-based virtual machines (VMs) running in Azure. I need to have a C program running on a Linux machine, specifically Ubuntu 14.02. I also need to use AMQP, which was developed in the financial industry to overcome some of the inefficiencies found in HTTP. You can still use HTTP as your transport protocol, but AMQP is much more efficient in terms of latency and throughput. This is a protocol built upon TCP and has excellent performance on Linux. The whole point in simulating the Raspberry Pi on a Linux VM is you can copy most, if not all, of the code to the Raspberry Pi, which supports many Linux distributions. So instead of reading data from an attached rainfall sensor, the event producer will read from a text file that contains 12 months of rainfall data for a few hundred cities in the United States. The core technology for high-scale data ingestion in Azure is called Event Hubs. Azure Event Hubs provide hyper-scalable stream ingestion, which lets tens of thousands of Raspberry Pi devices (event producers) send continuous streams of data without interruption. You can scale Azure Event Hubs by defining throughput units (Tus), whereby each throughput unit can handle write operations of 1,000 events per second or 1MB per second (2MB/s for read operations). Azure Event Hubs can handle up to 1 million producers exceeding 1GB/s aggregate throughput.
Sending Events
Sending events to Azure Event Hubs is simple. An entity that sends an event, as you can see in Figure 1, is called an event publisher. You can use HTTPS or AMQP for transfer. I'll use Shared Access Signatures (SAS) to provide authentication. An SAS is a time-stamped unique identity providing Send and Receive access to event publishers. To send the data in C#, create an instance of EventData, which you can send via the Send method. For higher throughput, you can use the SendBatch method. In C, AMQP provides a number of methods to provide a similar capability.
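The ingestion code in this article is written in C (see Figure 3 later), but for a .NET back end the same idea is a few lines with the Service Bus SDK. This is only a minimal sketch; the namespace, key, hub name and payload format are placeholders, not values from the article's sample:

  // Requires the WindowsAzure.ServiceBus NuGet package (Microsoft.ServiceBus.Messaging)
  var client = EventHubClient.CreateFromConnectionString(
    "Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=SendRule;SharedAccessKey=[key]",
    "rainfallhub");

  // One event: city, month and rainfall reading as a UTF-8 payload
  client.Send(new EventData(Encoding.UTF8.GetBytes("Seattle,4,2.61")) { PartitionKey = "Seattle" });

  // Higher throughput: batch several events into a single call
  var batch = new List<EventData>
  {
    new EventData(Encoding.UTF8.GetBytes("Seattle,5,1.93")),
    new EventData(Encoding.UTF8.GetBytes("Portland,5,2.38"))
  };
  client.SendBatch(batch);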
Figure 1 Overall Architecture of the Rainfall Analysis System
Partition Keys
One of the core concepts in Azure Event Hubs is partitions. Partition Keys are values used to map the incoming event data into specific partitions. Partitions are simply an ordered system in which events are held at Event Hubs. As new events are sent, they're added at the end of the partition. You can think of each partition as a separate commit log in the relational database world, and they work in a similar fashion. By default, each Azure Event Hub contains eight partitions. You can go beyond 32 partitions, but that requires a few extra steps. You'd only need to do that based on the degree of downstream parallelism required for consuming applications. When publishing events, you can target specific partition keys, but that doesn't scale well and introduces coupling into the architecture. A better approach is to let the internal hashing function use a round robin event partition mapping capability. If you specify a PartitionKey, the hashing function will assign it to a partition (the same PartitionKey always maps to the same partition). The round robin assignment happens only when no partition key is specified.
Figure 2 Use PuTTY to Remote in to the Ubuntu Linux VM
Get Started
A great place to start is to go through the tutorial at bit.ly/1F2gp9H. You can choose your programming language: C, Java or C#. The code for this article is based on C. Remember, I’m simulating the Raspberry Pi devices by reading from a text file that contains rainfall data. That said, it should be easy to port all this code to a Raspberry Pi device. Perhaps the best place to start is to provision Azure Event Hubs at the portal. Click on App Services | Service Bus | Event Hub | Quick Create. Part of the provisioning process involves obtaining a shared access signature, which is the security mechanism used by your C program to let it write events into Azure Event Hubs.
Figure 3 Source Code in C to Send Message to Azure Event Hubs
int main(int argc, char** argv)
{
  printf("Press Ctrl-C to stop the sender process\n");
  FILE * fp;
  char line[512];
  size_t len = 0;
  size_t read = 0;
  int i = 0;
  int curr_field = 0;
  int trg_col = 0;
  pn_messenger_t *messenger = pn_messenger(NULL);
  pn_messenger_set_outgoing_window(messenger, 1);
  pn_messenger_start(messenger);
  fp = fopen("weatherdata.csv", "r");
  if (fp == NULL)
    exit(EXIT_FAILURE);
  while (fgets(line, 512, fp) != NULL)
  {
    for (i = 0; line[i] != '\0'; i++)
    {
      if (line[i] == ',')
      {
        fields[curr_field][trg_col] = '\0';
        trg_col = 0;
        curr_field += 1;
      }
      else
      {
        fields[curr_field][trg_col] = line[i];
        trg_col += 1;
      }
    }
    trg_col = 0;
    curr_field = 0;
    for (i = 1; i < 13; i++)
    {
      sendMessage(messenger, i, fields[0], fields[i]);
      printf("%s -> %s\n", fields[0], fields[i]);
    }
    printf("\n");
  }
  fclose(fp);
  // Release messenger resources
  pn_messenger_stop(messenger);
  pn_messenger_free(messenger);
  return 0;
}

int sendMessage(pn_messenger_t * messenger, int month, char *f1, char *f2)
{
  char * address = (char *) "amqps://SendRule:[secret key]@temperatureeventhub-ns.servicebus.windows.net/temperatureeventhub";
  int n = sprintf(msgbuffer, "%s,%d,%s", f1, month, f2);
  pn_message_t * message;
  pn_data_t * body;
  message = pn_message();
  pn_message_set_address(message, address);
  pn_message_set_content_type(message, (char*) "application/octect-stream");
  pn_message_set_inferred(message, true);
  body = pn_message_body(message);
  pn_data_put_binary(body, pn_bytes(strlen(msgbuffer), msgbuffer));
  pn_messenger_put(messenger, message);
  check(messenger);
  pn_messenger_send(messenger, 1);
  check(messenger);
  pn_message_free(message);
}
Figure 4 Partial View of Text File with Rainfall Data
Once you provision Azure Event Hubs, you'll also get a URL you'll use to point to your specific instance of Azure Event Hubs inside the Azure datacenter. Because the SAS key contains special characters, you'll need to encode the URL, as directed at bit.ly/1z82c9j.
Provision an Azure VM and Install AMQP
Because this exercise will simulate the IoT scenario by provisioning a Linux-based VM in Azure, it makes sense to do most of your work on a full-featured VM in the cloud. The development and testing environment on a Raspberry Pi is limited. Once you get your code up and running in the VM, you can simply repeat this process and copy the binaries to the Raspberry Pi device. You can find additional guidance on provisioning the Linux-based VM on Azure at bit.ly/1o6mrST. The code base I have used is based on an Ubuntu Linux image. Now that you’ve provisioned Azure Event Hubs and a VM hosting Linux, you’re ready to install the binaries for the AMQP. As I mentioned earlier, AMQP is a high-performance, lightweight messaging library that supports a wide range of messaging applications such as brokers, client libraries, routers, bridges, proxies and so on. To install AMQP, you’ll need to remote into the Ubuntu Linux machine and install the AMQP Messenger Library from bit.ly/1BudbhA. Because I do most of my work from a Windows machine, I use PuTTY to remote into the Ubuntu image running in Azure. You can download Putty at putty.org. To use PuTTY, you’ll need to get your VM URL from the Azure Management Portal. Developers using OS X can simply use SSH to remote in from the Mac terminal. You might find it easier to install AMQP on CentOS instead of Ubuntu as directed at bit.ly/1F2k47z. Running Linux-based workloads in Azure is a popular tactic. If you’re already accustomed to Windows development, you should get familiar with programming in the Linux world. Notice in Figure 2 that I’m pointing to my Ubuntu VM (vmeventsender.cloudapp.net).
Once you’ve installed AMQP on your Ubuntu VM, you can take advantage of the example provided by the AMQP installation process. On my particular deployment, here’s where you’ll find the send.c file: /home/azureuser/ dev/qpid-proton-0.8/examples/ messenger/c/send.c. (The code download for this article includes my edited version of send.c.) You’ll essentially replace the default example installed by AMQP with my revised version. You should be able to run it as is, except for the URL pointing to Azure Event Hubs and the encoded shared access signature. The modifications I made to send.c include reading the rainfall data from weatherdata.csv, also included in the code download for this article. The code in Figure 3 is fairly clear. The main entry method starts by opening the text file, reading one line at a time and parsing the rainfall data into 12 separate pieces—one for each month. Figure 4 gives you a partial view of the text file with the rainfall data. The C code will connect to the Azure Event Hubs instance you provisioned earlier, using the URL and the encoded shared access signature. Once connected, the rainfall data will load into the message data structure and be sent to Azure Event Hubs using pn_messenger_send. You can get a complete description of these methods at bit.ly/1DzYuud. There are only two steps remaining at this point. The first is to actually compile the code, which simply involves changing a directory and issuing the make install command. The final step is to actually run the application you just created (see Figure 5): // Part 1 – Compiling the code cd /home/azureuser/dev/qpid-proton-0.8/build/examples/messenger/c make install // Part 2 – Running the code cd /home/azureuser/dev/qpid-proton-0.8/build/examples/messenger/c ./send
Wrapping Up
In the next installment, I’ll show what it takes to consume the events from Azure Event Hubs and Azure Stream Analytics, and store the data in a SQL database and a JSON-based data store known as Azure DocumentDB. Subsequent articles will delve into exposing this data to mobile applications. In the final article of this series, I’ll build a mobile application that provides a visualization of the rainfall data. n Bruno Terkaly is a principal software engineer at Microsoft with the objective of
enabling development of industry-leading applications and services across devices. He’s responsible for driving the top cloud and mobile opportunities across the United States and beyond from a technology-enablement perspective. He helps partners bring their applications to market by providing architectural guidance and deep technical engagement during the ISV’s evaluation, development and deployment. Terkaly also works closely with the cloud and mobile engineering groups, providing feedback and influencing the roadmap.
Figure 5 Output from the Running Code
Thanks to the following Microsoft technical experts for reviewing this article: James Birdsall, Pradeep Chellappan, Juan Perez, Dan Rosanova
MICROSOFT AZURE
Automate Creating, Developing and Deploying Azure Websites
Gyan Jadal
Microsoft Azure is a quick and easy way to create and deploy
Web sites or APIs for your cloud consumers. Much has been written on different aspects of Azure Websites, including security, deployment and so on. It’s always a challenge to bring this information into a real-life implementation. In this article, I’ve pulled together these various facets to help you quickly create a production-ready Azure Website. I’ll walk you through the end-to-end lifecycle of creating, deploying, configuring and monitoring an Azure Website. I’ll start by creating a Web site with Visual Studio 2013 and deploy it to Azure. Then I’ll set up continuous integration from my source code repository to Azure, establishing authentication using the API key or certificates. Finally, I’ll show how to create and deploy the site to a production environment using Windows PowerShell.
Using Visual Studio 2013, I’ll create a new ASP.NET Web application by choosing empty template. You can select WebAPI if you want to expose an interface using the same Web site. I select the AppInsights and Hosted in Azure checkboxes. I don’t choose any Authentication for now because I’ll add it later. I’ll also configure Application Insights for deploying to different environments. When it prompts for Azure Website details, as shown in Figure 1, I enter the Site name, select a Region and click OK. This will set up an empty Web site and publish it to Azure. Now, I’ll add some content to the site and use the Open Web Interface for .NET (OWIN) as the middleware. To enable OWIN, I add the following NugGet Packages: Microsoft.Owin.Host.SystemWeb and Microsoft.AspNet.WebApi.Owin.
Create an Azure Web Site
This is the simplest step of the whole process. I’ll create an Azure Website and add a controller, skipping the business logic inside it to illustrate the key points. To start, I need the Windows Azure SDK for .NET 2.5, which can be downloaded from bit.ly/1GlhJpu. This article discusses: • Creating and configuring an Azure Website • Setting up authentication • Setting up automatic deployment for various environments
Technologies discussed: Microsoft Azure, Visual Studio 2013, Application Insights
Code download available at: msdn.microsoft.com/magazine/msdnmag0415
Figure 1 Configure Microsoft Azure Website Details
After installing the NuGet packages, I right-click on the Project and select Add OWIN startup class. I’ll name the class Startup and add the following code to the Configuration method, which sets up the controller routing: public void Configuration(IAppBuilder app) { var config = new HttpConfiguration(); config.Routes.MapHttpRoute("default", "{controller}/{id}", new { id = RouteParameter.Optional}); app.UseWebApi(config); }
Next, I’ll add a controller to provide an API by right-clicking the Project and clicking Add Web API Controller class. I’ll name the class MyWebController and leave the generated code alone. Next, I right-click the Project and Publish. In the publish dialog under the Settings tab, I choose the Debug configuration so the site can be debugged remotely. Then I click Next and Publish. When I navigate to my Azure Website, the controller Get method is invoked by default, which returns a JSON string as [“value1”,“value2”]. My Web site is up and running on Azure. So far, so good. Now, I’ll add a Client application to invoke the Web site API I just exposed on the cloud. I create a new Console application called MyConsoleClient and add the ASP.NET Web API 2.2 Cient Libraries NuGet Package. This package provides the HttpClient object I’ll use to invoke the API. The following code is added to the main method: var httpClient = new HttpClient(); var requestMineType = new MediaTypeWithQualityHeaderValue("application/json"); httpClient.DefaultRequestHeaders.Accept.Add(requestMineType); var httpResponse = httpClient.GetAsync( "https://MyAzureWebsiteSample.azurewebsites.net/MyWeb").Result; Console.WriteLine(httpResponse.Content.ReadAsStringAsync().Result); Console.ReadLine();
Now I’ll run the client and see how the controller Get method is invoked and the [“value1”,“value2”] is printed on the console. I published Debug configuration for my Web site. Although you can debug the site locally, sometimes it’s helpful to remotely debug the published Web site while it’s still in development. To debug the published site, I go to its Configure tab on the portal and turn on remote debugging, selecting the version of Visual Studio being used (see Figure 2). To debug the Web site remotely and debug my console client, I put a break on the GetAsync line. Next, I open Server Explorer, rightclick the Web site and select Attach Debugger, as shown in Figure 3. Now I can step into the code and debug the site remotely from my machine and commit my changes to the source repository.
Set up Continuous Integration
Now I’ll work through integrating my site with the Source Repository for Continuous Integration (CI). How cool it would be that, when you check in code to your repository, the Web site project is built and automatically deployed to Azure? The Integrate Source Control feature of Azure Websites gives you an easy way to link your source repository to the Azure Website for continuous integration. On the Azure portal, I navigate to the home tab of the Web site and click on Set up deployment from source control, as shown in Figure 4. Depending on what you’re using as your source control provider, you have options to choose from Visual Studio Online, local Git repository, GitHub and so on. As my code is in the Visual Studio msdnmagazine.com
Figure 2 Enabling Remote Debugging for Microsoft Azure Websites
Online repository, I choose it on the next screen. Next, I enter my Visual Studio Online account name and authorize my account. I have to provide consent to allow the connection request from Azure to my Visual Studio Online account. Then I choose the Repository name, which is my team project name. After I click Next, the link is established between my Azure Website and Visual Studio Online team project. In the Builds option in Team Explorer in Visual Studio, I should now see the build definition created for me as MyAzureWebsiteSample_ CD so I can right-click and edit the definition. The queue processing is disabled by default on the general tab, so I’ll set it to Enabled. In the process parameters under Build/Projects, I browse and select the solution to build.
Although you can debug the site locally, sometimes it’s helpful to remotely debug the published Web site while it’s still in development. At this point, I’ll make any other changes to the process parameters in the Process tab. While on the Process tab, take a look at the Deployment parameter. This is where integration is set up for the build definition to the Azure service. If you have multiple projects in the repository, the first Web site as alphabetically ordered will be deployed. To avoid this, ensure you have a separate solution for each Web site for which you have CI enabled. Check in your code or right-click and queue a build. You should see the Web site being built and published to Azure. You can verify whether continuous integration is working by clicking on the Deployments tab of the Web site, as shown in Figure 5. Here, you’ll see the last time a successful deployment took place from the CI build.
Configure Authentication
The Azure Website is now ready for development, adding business functionality and reaping the benefits of CI. You'll want to set up authentication for the site so you can authenticate and authorize consumers. There are different scenarios for this setup, details of which you'll find at bit.ly/1CHVni3.
Before I set up my Azure Website for authentication, I'll add the Authorize attribute to my controller and publish the site. Running the client against the site now returns the following message:
{"Message": "Authorization has been denied for this request."}
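As a minimal sketch (the controller name MyWebController is my assumption; the article only shows the /MyWeb route), applying the attribute at the class level rejects every unauthenticated call. It requires System.Web.Http and System.Collections.Generic:
// Hypothetical controller; returns the values printed by the console client
[Authorize]
public class MyWebController : ApiController
{
  public IEnumerable<string> Get()
  {
    return new[] { "value1", "value2" };
  }
}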
Here, I'll enable authentication on my Azure Website and update the console client to provide either API key or certificate credentials for authentication. Authentication and token generation are delegated to Azure Active Directory (AAD). Figure 6 shows the authentication workflow between the client, AAD and the service. For AAD authentication, both the client and the Web site need corresponding application entries registered in AAD. To do this, I navigate to the Active Directory tab on the portal, select the organization's directory and click Applications. Next, I select Add an application, then Add an application my organization is developing, and enter the name of the Web site.
In my case, I enter MyAzureWebsiteSample, then select the Web application or Web API option and click Next. On the next screen, I enter the address of my site, MyAzureWebsiteSample.azurewebsites.net, for the Sign on URL and [mydomain].onmicrosoft.com/MyAzureWebsiteSample as the App ID URI, and click OK. This will create an AAD application for the Web site. Next, I add the Client application entry to AAD, repeating the same steps. For the Sign on URL and App ID URI, I enter http://myconsoleclient. Because this is a client app, it doesn't have to be a physical URL; a valid URI is sufficient.
Figure 3 Attach Debugger to the Published Web site
Now I want to let the client access the MyAzureWebsiteSample application once it has been authenticated. For this, on the Configure tab, in the Permissions to other applications section, I click Add Application, which opens a pop-up window. I select Show Other, search for MyAzureWebsiteSample, then select it and click OK. Back on the Configure tab, under Delegated Permissions is a permission to access MyAzureWebsiteSample (see Figure 7). I check it and then click Add application. Also, I'll note the ClientId for the Client application entry because I'll need it soon. If you want to use an API key for authentication, a key would be created on this configuration page. Because I'm using certificate authentication, I need to associate my certificate with the Client application entry in AAD. To do this, I'll use the Azure AD Module for Windows PowerShell to automate the process. You can download it at bit.ly/1D4gt8j. I run the following Windows PowerShell commands to import a .cer file from the local machine and upload the certificate to the client application entry:
Connect-MsolService
$cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate
$cer.Import("$pwd\MyConsoleClientCert.cer")
$binCert = $cer.GetRawCertData()
$credValue = [System.Convert]::ToBase64String($binCert);
New-MsolServicePrincipalCredential -AppPrincipalId "" -Type asymmetric -Value $credValue -StartDate $cer.GetEffectiveDateString() -EndDate $cer.GetExpirationDateString() -Usage verify
The clientid in this command is the one I copied in the previous step. To verify the certificate uploaded successfully, I run the following cmdlet, providing the clientid:
Get-MsolServicePrincipalCredential -AppPrincipalId "" -ReturnKeyValues 1
This will display all certificates associated with the clientid; in this case, it should display one. I'm done setting up my apps in AAD. The next step is to enable the Web site and the client to use AAD for authentication. First, I'll update the MyAzureWebsiteSample project to be set up for AAD. I'll add the following OWIN security NuGet packages related to authentication: Microsoft.Owin.Security and Microsoft.Owin.Security.ActiveDirectory. In the Startup class of MyAzureWebsiteSample, I add the following code to the Configuration method:
app.UseWindowsAzureActiveDirectoryBearerAuthentication(
  new WindowsAzureActiveDirectoryBearerAuthenticationOptions
  {
    TokenValidationParameters = new TokenValidationParameters
    {
      ValidAudience = "https://[mydomain].onmicrosoft.com/MyAzureWebsiteSample",
      ValidateAudience = true,
    },
    Tenant = "[mydomain].onmicrosoft.com"
  });
You'll replace [mydomain] with the name of your AAD directory domain. This tells the Web site to use AAD authentication for the specified audience URI. Clients accessing the service will need to provide this URI to get a token from AAD. On the client side, I use the Active Directory Authentication Library, also called ADAL, for the authentication logic and add its NuGet package to the MyConsoleClient console application.
ADAL
In the client, I'll use APIs provided by ADAL to acquire a token from AAD by providing the certificate credentials. I replace the previously added code with the code in Figure 8, which includes authentication. You'll replace the clientid and the name of my certificate with your own. I've abstracted the certificate retrieval code into the GetX509CertObject method, which reads the certificate store and creates the X509Certificate object.
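The article doesn't show GetX509CertObject itself, so here's a minimal sketch of what such a helper might look like. The store name, store location and error handling are my assumptions (the certificate is assumed to be installed in the current user's Personal store); it returns an X509Certificate2, which is what ClientAssertionCertificate in Figure 8 expects. It requires System.Security.Cryptography.X509Certificates:
// Hypothetical helper; looks up a certificate by subject name in CurrentUser\My
private static X509Certificate2 GetX509CertObject(string subjectName)
{
  var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
  store.Open(OpenFlags.ReadOnly);
  try
  {
    var matches = store.Certificates.Find(
      X509FindType.FindBySubjectDistinguishedName, subjectName, false);
    if (matches.Count == 0)
      throw new InvalidOperationException("Certificate not found: " + subjectName);
    return matches[0];
  }
  finally
  {
    store.Close();
  }
}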
If you use the APIKey instead of certificate authentication, you'll construct a ClientCredential object, passing in the clientId and the APIKey noted down from the portal, instead of passing the ClientAssertionCertificate object. If no client credentials are provided to acquire the token, you'll be prompted for AAD tenant user credentials. I run the code in Figure 8 to authenticate the client, invoke the MyAzureWebsiteSample controller method and print ["value1","value2"]. The controller is presented with the Claims of the client, which I can use to further authorize the client.
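For the API key path, a minimal sketch of the token call might look like the following; the placeholder values are mine, not from the article, and the rest of the Figure 8 code stays the same:
// ClientCredential replaces ClientAssertionCertificate when using an API key
var clientCredential = new ClientCredential(
  "<clientid of the Client application entry>",  // noted from the Configure tab
  "<API key created on the Configure tab>");
var authResult = authContext.AcquireTokenAsync(
  "https://[mydomain].onmicrosoft.com/MyAzureWebsiteSample",
  clientCredential).Result;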
Figure 4 Set up Deployment from Source Control
Automate Deployment
So far, I've worked with a development environment to deploy the Azure Website. I just check in the code and the site is deployed, or I right-click and it's published. It's that easy. However, that's not the case for QA or production environments, where it may not be feasible to deploy from Visual Studio. Now I'll show you how to automate deployment to any environment using Windows PowerShell. I could perform all these steps from the portal as well, but that would involve many manual steps. Azure PowerShell provides cmdlets to create Azure resources such as Web sites, storage and so on. I'll go over a few of the important aspects of my automation script; the full script is available in the source code download accompanying this article. The resource manager mode in Azure PowerShell lets you logically group multiple resources, such as a Web site, Redis cache, SQL Azure database and others, into a resource group. I use the Switch-AzureMode cmdlet to switch my Azure PowerShell environment between ResourceManager mode and ServiceManagement mode.
The following code illustrates a few noteworthy points from the script. First, Add-AzureAccount displays an interactive window prompting me to sign in to my Azure account. Select-AzureSubscription sets the current subscription. Then I switch modes to ResourceManager, create a new resource group and add my Web site to it as a new resource:
$SubscriptionId = ""
Add-AzureAccount
Select-AzureSubscription -SubscriptionId $SubscriptionId -Current
Switch-AzureMode -Name AzureResourceManager
$myResourceGroup = New-AzureResourceGroup -Name "MyResourceGroup" -Location "West US"
$mywebsite = New-AzureResource -Name "MyAzureWebsiteSample" -ResourceType "Microsoft.Web/sites" -ResourceGroupName "MyResourceGroup" -Location "West US" -ApiVersion "2014-04-01" -PropertyObject @{}
I use the resource type Microsoft.Web/sites. There are other resource types, such as Microsoft.Sql/servers/databases, that you can create with the New-AzureResource cmdlet. In the next snippet, I create an Azure storage account and associate it with my Web site for storing diagnostics tracing. Here I'm not using ResourceManager cmdlets; I created the storage account outside my resource group because, at this time, adding storage accounts to resource groups isn't yet supported. To create a table in the storage account, I need to obtain a context, which I get by passing in the storage account key. The key is passed from one cmdlet to another without the user having to know the specifics:
$myStorage = New-AzureStorageAccount -StorageAccountName "MyStorageAccount" -Location "West US"
$storageAccountKey = Get-AzureStorageKey -StorageAccountName "MyStorageAccount"
$context = New-AzureStorageContext -StorageAccountName "MyStorageAccount" -StorageAccountKey $storageAccountKey.Primary
$logTable = New-AzureStorageTable -Name $MyStorageTableName -Context $context
The Enable-AzureWebsiteApplicationDiagnostic cmdlet enables diagnostics using the storage account I provided:
Enable-AzureWebsiteApplicationDiagnostic -Name "MyAzureWebsiteSample" -Verbose -LogLevel Information -TableStorage -StorageAccountName "MyStorageAccount" -StorageTableName $MyStorageTableName
Figure 5 Deployment History with Continuous Integration
Figure 6 Azure Active Directory Authentication Workflow (Client, AAD and MyAzureWebsiteSample: 1. Request Token with certificate/key credentials; 2. Return Token; 3. API Call with Token; 4. Authenticate Token)
Figure 7 Setting Permissions on Client Application to Access the Azure Website in Azure Active Directory
Figure 8 Add Authentication to the API
var httpClient = new HttpClient();
var requestMineType = new MediaTypeWithQualityHeaderValue("application/json");
httpClient.DefaultRequestHeaders.Accept.Add(requestMineType);
var authContext = new AuthenticationContext(
  "https://login.windows.net/[mydomain].onmicrosoft.com");
var clientAssertionCertificate = new ClientAssertionCertificate("",
  GetX509CertObject("CN=MyClientConsoleCert"));
var authResult = authContext.AcquireTokenAsync(
  "https://[mydomain].onmicrosoft.com/MyAzureWebsiteSample",
  clientAssertionCertificate).Result;
httpClient.DefaultRequestHeaders.Authorization =
  new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
var httpResponse = httpClient.GetAsync(
  "https://MyAzureWebsiteSample.azurewebsites.net/MyWeb").Result;
Console.WriteLine(httpResponse.Content.ReadAsStringAsync().Result);
Console.ReadLine();
For any Web site, there are some settings in the AppSettings of the web.config that are environment-specific. The following snippet shows how to replace AppSettings for an Azure Web site based on the environment to which it's being deployed. These settings will override the values present in the Web site's web.config:
$appSettings = @{
  "CurrentEnv" = "Test";
  "ida:Audience" = "https://[mydomain].onmicrosoft.com/MyAzureWebsiteSample";
  "ida:ClientId" = "8845acba-0820-4ed5-8926-5652s5353s662";
  "ida:Tenant" = "[mydomain].onmicrosoft.com";
  "any:other" = "some other setting";
}
Set-AzureWebsite -Name "MyAzureWebsiteSample" -AppSettings $appSettings
A collection of key-value pairs is created and set on the Web site by invoking Set-AzureWebsite. This will override the settings in the web.config. You can also change these from the portal later. So far, I've set up my Web site's infrastructure. Now I want the application logic in my service to be published. I'll use the Publish-AzureWebsiteProject cmdlet to do this job. I have to provide my site name and the package produced as part of the build output for my project:
Publish-AzureWebsiteProject -Package "" -Name "MyAzureWebsiteSample"
I took the package path from my build output: drop\_PublishedWebsites\MyAzureWebsiteSample_Package\MyAzureWebsiteSample.zip
The script also uploads the certificate to AAD. You can download the full script from the code download accompanying this article. You can also check bit.ly/1zTBeQo to learn more about the feature-rich experiences the Azure Websites portal provides for managing Web site settings and secrets as you move from development to integration to production deployment environments.
Enable Application Insights
Once your site is in production, you can gather telemetry to understand how the site is behaving in terms of scaling and performance needs. Application Insights provides a telemetry solution for Azure-deployed resources. To enable Application Insights, I had to select it as part of the site template. This adds the necessary NuGet packages and an ApplicationInsights.config file (which contains the instrumentation key) to the project. Now when the site is published, the Application Insights resource is created and linked with the key in the config file. For production environments, create an Application Insights resource on the new portal; at this time, there are no Azure PowerShell cmdlets available to set up Application Insights. From its properties, make a note of the instrumentation key. You need to add this key to the ApplicationInsights.config file before the Web site is published to ensure it's deployed with the site. However, there's no way to update this file during site publishing. You can get around this issue by not relying on the key from the ApplicationInsights.config file. Instead, add the key to appsettings. This way, you can override it for each environment. How does the
site know to use this key? For that, I updated the startup code in my Web site to read the key from the web.config and set it in the App Insights configuration object:
Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = "";
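As a minimal sketch of that startup code (the appSettings key name ai:InstrumentationKey is my assumption, not from the article), the value can be read with ConfigurationManager and applied before any telemetry is sent; it requires System.Configuration:
// Assumes <add key="ai:InstrumentationKey" value="..."/> in web.config appSettings,
// which the deployment script overrides per environment
var instrumentationKey = ConfigurationManager.AppSettings["ai:InstrumentationKey"];
if (!string.IsNullOrEmpty(instrumentationKey))
{
  Microsoft.ApplicationInsights.Extensibility
    .TelemetryConfiguration.Active.InstrumentationKey = instrumentationKey;
}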
I also removed the key from my ApplicationInsights.config file because it’s no longer required. Now I pass the key to my automation Azure PowerShell script. This updates the appsettings during site deployment. Once you’ve set up Application Insights, you can view performance data on the portal or by installing monitoring agents.
Wrapping Up
This article has been a basic step-by-step walkthrough for taking an Azure Website from a development environment, setting up authentication and automating deployment in Azure, all the way to a production environment. Azure has come a long way in providing rich and powerful APIs through Azure PowerShell and the new enhanced portal at portal.azure.com. You can extend this setup to add your own resources, such as SQL Azure or additional storage. Using the techniques mentioned here, you should be able to quickly spin up a new Azure Website and set up authentication and automated deployments to any environment.
Gyan Jadal is a software engineer working for Microsoft. He has extensive experience designing and developing solutions for applications based on Azure and enterprise on-premises systems. Reach him at [email protected].
Thanks to the following Microsoft technical expert for reviewing this article: Mani Sengodan