Channel: Geekswithblogs.net


FTP Adapter - No such host is known Please check the configuration and The URI scheme is not valid


Originally posted on: http://geekswithblogs.net/RobBowman/archive/2015/06/24/ftp-adapter---no-such-host-is-known--please.aspx

Earlier today I hit the following errors (found in the event log) when trying out a newly deployed FTP send port:

 

The adapter failed to transmit message going to send port "DCSendLimaPurchaseOrder_FTP" with URL "ftp://<servername>:21/uat/GO_PO_XML/inbound/pending/File_To_Lima_%datetime_bts2000%.xml". It will be retransmitted after the retry interval specified for this Send Port. Details: "DNS Lookup for the server "ftp://<servername>:21" failed with the following error message: No such host is known. Please check the configuration."

 

And

 

The adapter failed to transmit message going to send port "DCSendLimaPurchaseOrder_FTP" with URL "<servername>:21/uat/GO_PO_XML/inbound/pending/File_To_Lima_%datetime_bts2000%.xml". It will be retransmitted after the retry interval specified for this Send Port. Details:"Invalid URI: The URI scheme is not valid.".

 

The FTP Adapter is very particular about the format of its <uri> and <serverAddress> elements. After much head-scratching today, I finally came up with a set of values for the btdfproj file that works:

 

PortBindingsMaster.xml

<PrimaryTransport>
  <Address>ftp://${LimaHost}:21/${LimaSendPath}/${LimaSendFilename}</Address>
  <TransportType Name="FTP" Capabilities="80907" ConfigurationClsid="3979ffed-0067-4cc6-9f5a-859a5db6e9bb" />
  <TransportTypeData>
    <CustomProps>
      <AdapterConfig vt="8">
        <Config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <uri>ftp://${LimaHost}:21/${LimaSendPath}/${LimaSendFilename}</uri>
          <serverAddress>${LimaHost}</serverAddress>
          <serverPort>21</serverPort>
          <userName>${LimaUserName}</userName>
          <password>${LimaPassword}</password>
          <accountName />
          <targetFolder>${LimaSendPath}</targetFolder>
          <targetFileName>${LimaSendFilename}</targetFileName>
          <representationType>binary</representationType>
          <allocateStorage>False</allocateStorage>
          <appendIfExists>False</appendIfExists>
          <connectionLimit>0</connectionLimit>
          <passiveMode>True</passiveMode>
          <firewallType>NoFirewall</firewallType>
          <firewallAddress />
          <firewallPort>21</firewallPort>
          <useSsl>False</useSsl>
          <useDataProtection>True</useDataProtection>
          <ftpsConnMode>Explicit</ftpsConnMode>
        </Config>
      </AdapterConfig>
    </CustomProps>
  </TransportTypeData>
</PrimaryTransport>

 

SettingFileGenerator.xml

 

LimaHost: <servername>

LimaSendPath: uat/GO_PO_XML/inbound/pending

LimaPassword: XXX

LimaUserName: username

Error “Couldn’t install programs” when installing Windows Live Writer


Originally posted on: http://geekswithblogs.net/mapfel/archive/2015/06/24/165315.aspx

When you download the latest version of Windows Live Writer (WLW), you normally get the online installer: a small program, wlsetup-web.exe, which downloads the necessary components during setup.

Unfortunately, on several machines this setup failed with the error “Couldn’t install programs”.


Thankfully there is also an offline installer available, containing the complete bundle of live tools:

http://windows.microsoft.com/en-US/windows-live/essentials-install-offline-faq

According to some reports on the web, it is recommended to run the installer in plain English; otherwise it seems to try to connect to unavailable resources, which also leads to problems.

 

Of course, wlsetup-all.exe is a little larger.


But it lets you select the Live tools you need, and after the installation everything works fine.



Using Git for versioning of Word (doc/docx) documents


Originally posted on: http://geekswithblogs.net/mapfel/archive/2015/06/24/165317.aspx

In too many projects I have seen chaos in how different versions of documents are stored (when documents are edited over time and older versions are kept for later analysis).

Each colleague has his or her own idea of how to name documents and how to encode the status or version in the name. So it is not uncommon to see timestamps as prefixes or suffixes, user names or acronyms as suffixes, and version numbers as suffixes. It gets really interesting when multiple users collaborate on the same document and each introduces their own naming scheme. After a few versions you are completely lost as to which document is the most recent. Sometimes you can hope that sorting by the OS timestamp (Date modified) in File Explorer leads you to the right one. But too often somebody opens a document, changes something inside by accident (e.g. auto fields, like dates), and confirms the save prompt on closing with Yes. Then you have an older version with a newer timestamp.

Normally we use the compact ISO timestamp format (without the dashes) as a prefix, but too often that rule was initially broken and only adopted later. That leads to folder contents like this:

[screenshot: folder listing]

The first two files don’t follow the rule.

Now imagine that a lot of other, unrelated files are in the same folder. It is almost impossible to recognize what belongs together. It gets even more difficult if the file name changed in the meantime.

 

I’m not happy with all the different versions inside the folder, and I’m also not happy that we enrich the file name with version information. But it has the benefit that at one glance you immediately know when the document is from, notably after sending the file between different parties.

Some years ago I tried to address this problem with Git: keeping only one file per topic inside the folder and getting older versions out of the repository when necessary. But at that time (2011?) it was not practical. I cannot remember the exact issues, but it was not worth introducing.

But the situation has changed. Git can now recognize the renaming of these doc files (e.g. when a new timestamp is prefixed or a version is suffixed) and at the same time track the changes inside the Word documents.

So I gave it a try and added the existing versions one by one to a repository, replaying the document's past evolution. To be precise: what would normally happen while working with the document in an ideal world (edit, rename, commit changes), I did within a few minutes. The important thing was to keep the versions together historically, and not to end up with individual commits or independent versions of the files.

 

To bring all the already-created versions into a repository in a meaningfully versioned way, I did the following.

  1. Move all the files to a temporary folder
  2. Run git init in the empty folder to create a local repository there
  3. Move the first version of the file (Folgeprojekt_Leistungsbeschreibung V1.doc) into that folder
  4. Add it to the index (git add Folgeprojekt_Leistungsbeschreibung\ V1.doc)
  5. Commit that change set (git commit -m "initial commit with statement of work for the follow-up project, version 1")
  6. Delete this file
  7. Move the second version into that folder
  8. Run git status to see the new (for the moment untracked) file and the deleted (already tracked) previous one
  9. Add the new one to the index (git add Folgeprojekt_Leistungsbeschreibung\ V2.doc)
  10. Add the deletion of the previous version to the index (git add Folgeprojekt_Leistungsbeschreibung\ V1.doc)
  11. Run git status to see that Git recognizes the renaming
  12. Commit that change set (git commit -m "add statement of work for the follow-up project, version 2")
  13. Repeat steps 6 through 12 for all the other versions
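The manual loop above can be sketched as a small shell script. This is my own sketch, not the exact commands from the post: the file names are made up, it assumes git is on the PATH, and it replays three versions of a document.

```shell
#!/bin/sh
# Replay three saved versions of a document as one git history.
set -e
rm -rf /tmp/doc-repo && mkdir -p /tmp/doc-repo && cd /tmp/doc-repo
git init -q .
git config user.email "you@example.com"
git config user.name "You"

prev=""
for v in 1 2 3; do
    name="Statement_of_work V$v.doc"
    # Mostly identical content, so git's similarity check detects the rename.
    { seq 1 20; echo "notes for version $v"; } > "$name"
    if [ -n "$prev" ]; then
        rm -f "$prev"
        git add -- "$prev"    # stage the deletion of the previous name
    fi
    git add -- "$name"        # stage the new version under its new name
    git commit -q -m "statement of work, version $v"
    prev="$name"
done

# --follow traverses the renames, so all three commits show up
# in the history of the current file name.
git log --follow --oneline -- "$name"
```

With `git log --follow` you get the full chain of commits even though each version had a different file name.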

In the log you can now see all the individual changes in one history line for that file.

For example, Git can diff version 1 directly against version 2.

I guess that via the similarity index, Git was able to understand that a deleted file plus a new file is only a new version of the same document, i.e. a renaming.

So Git can create the history line of a given file. I was really surprised to see that, because the doc format is binary and it takes some additional steps in the background to get that understanding.

Of course, you can also see the individual changes to the file.

 

I also tried this with the docx Word format. It works the same way as described above.

 

Summary

Given these capabilities, in future projects I will require the team to keep only the most recent version of a document in a folder. All versioning is done in the repository. This gives clean folders where you immediately get an overview of the distinct documents, and it avoids having to open multiple documents to understand how a document evolved over time.

Additionally, you can follow the evolution by diffing the individual versions. With (hopefully good) commit messages you have further information about what happened from one version to the next.


Get a list of all columns that participate in Foreign Key relationships


Originally posted on: http://geekswithblogs.net/SoftwareDoneRight/archive/2015/06/25/get-a-list-of-all-columns-that-participate-in-foreign.aspx

The following query returns all the column information for columns in the specified table that participate in a FK relationship.

You can modify the query to return PK information by changing the constraint_type filter to 'PRIMARY KEY'.

 

select * from information_schema.columns
where table_name = <TableName>
  and table_schema = <Schema>
  and column_name in
    (SELECT Col.Column_Name FROM
        INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab,
        INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col
     WHERE
        Col.Constraint_Name = Tab.Constraint_Name
        AND Col.Table_Name = Tab.Table_Name
        AND Tab.Constraint_Type = 'FOREIGN KEY'
        AND Col.Table_Name = <TableName>)

Connect to Nonstandard OData Services


Originally posted on: http://geekswithblogs.net/dataintegration/archive/2015/06/26/connect-to-nonstandard-odata-services.aspx

The CData ADO.NET Provider for OData enables you to expose Web services as a fully managed ADO.NET data source. It allows you to access almost any OData service from native ADO.NET tools. You can follow the procedure below to access OData sources that do not conform exactly to the OData protocol. This article will add support for the Microsoft Research service, which does not implement some common functionality, including support for retrieving metadata.

Define a Custom Schema File

The CData providers allow you to write custom schema files that define the metadata for the tables in your data source. This is useful when accessing OData services that do not implement the "$metadata" service metadata document. Additionally, storing metadata locally increases performance because it does not need to be retrieved from the data source each time.

Follow the steps below to create a schema file for the Downloads table in the Microsoft Research service.

  1. Use an existing schema as a template: Navigate to the db folder in the installation directory and make a copy of the sys_data.rsd schema file. Name the new file the same name as the table you want to connect to.
  2. In the new file (Downloads.rsd), delete all the rows between the rsb:info tags. On the rsb:info node, change the title attribute to match the name of the file (without the .rsd extension).
  3. Get the listing of columns by visiting the URL for the table in your browser, for example, http://odata.research.microsoft.com/odata.svc/Downloads. There should be an XML document returned with all the tables we are ultimately interested in retrieving with the OData data provider. You might need to view the page source here to see the actual XML in your browser. Look for the m:properties node under one of the entries returned. Each of the child elements here can be a field in the schema.
  4. Define columns for the fields that you want to have access to: Define an attr entry that (at a minimum) has the name and xs:type attributes. Here's a basic example for the Downloads table:
    <attr name="ID"          xs:type="integer"  key="true" readonly="true" description="The primary key for the Downloads table." ></attr>
    <attr name="Name"        xs:type="string" ></attr>
    <attr name="Downloads"   xs:type="long" ></attr>

    The preceding example uses the following optional attributes:

    • key: This attribute marks the field as the primary key. It is not necessary to have a primary key if the OData service is read-only.
    • readonly: If set to "true", this attribute disallows updates for this field.
    • description: This attribute provides a description.

    To specify complex data types, such as arrays, you can add the following attributes:

    • other:int_ColumnName: To denote child elements, set this attribute to the path of the field. Use periods instead of slashes.
    • other:datasourcedatatype: This attribute specifies the type of the root element followed by the EDM type of the child element.

    For example, on the Northwind Suppliers table, the other:int_ColumnName of "Address_Street" is "Address.Street" and the other:datasourcedatatype is "ODataDemo.Address.Edm.String".

  5. Add the following line exactly as is, in order to support paging:
     <input name="rows@next" description="A system column used for paging. Do not change." />

You can find the complete script below.

<rsb:script xmlns:rsb="http://www.cdata.com/ns/rsbscript/2">
<rsb:info title="Downloads" description="This is an example table showing how to build a custom schema file to connect to an OData source that does not conform exactly to the OData protocol.">
<attr name="ID"               xs:type="integer"  key="true" />
<attr name="Name"             xs:type="string"  ></attr>
<attr name="Downloads"        xs:type="long"  ></attr>
<attr name="FileName"         xs:type="string"  ></attr>
<attr name="FileSize"         xs:type="integer"  ></attr>
<attr name="Description"      xs:type="string"  ></attr>
<attr name="Version"          xs:type="string"  ></attr>
<attr name="Picture"          xs:type="string"  ></attr>
<attr name="ResearchAreas"    xs:type="string"  ></attr>
<attr name="Tags"             xs:type="string"  ></attr>
<attr name="URL"              xs:type="string"  ></attr>
<attr name="Eula"             xs:type="string"  ></attr>
<attr name="DateUpdated"      xs:type="datetime"  ></attr>
<attr name="DateCreated"      xs:type="datetime"  ></attr>
 
<input name="rows@next" description="A system column used for paging. Do not change." />
</rsb:info>
 
<rsb:script method="GET">
<rsb:call op="odataadoExecuteSearch" in="_input">
<rsb:push />
</rsb:call>
</rsb:script>
 
<rsb:script method="MERGE">
<rsb:call op="odataadoExecuteUpdate" input="_input">
<rsb:push />
</rsb:call>
</rsb:script>
 
<rsb:script method="POST">
<rsb:call op="odataadoExecuteInsert" input="_input">
<rsb:push />
</rsb:call>
</rsb:script>
 
<rsb:script method="DELETE">
<rsb:call op="odataadoExecuteDelete" input="_input">
<rsb:push />
</rsb:call>
</rsb:script>
</rsb:script>

Query the Table

After adding columns, you can use them in SELECT queries. If you do not need INSERTs, UPDATEs, or DELETEs, you can remove the rsb:script elements for POST, MERGE, and DELETE, respectively.

To use the schema file with any CData Data Provider, set the Location connection property to the folder containing this file.

Note that the default Id will be a URL of the OData item. If you have an Id field defined in the m:properties element for the service that you would rather use, you can set the "Use Id URL" connection string property to False.

Add INSERT, UPDATE, and DELETE Support

The Microsoft service in the example does not support INSERT, UPDATE, or DELETE because it does not provide a category node. For data sources that assign category elements to entries, you can get this support by setting the _input.entityname and _input.schemanamespace inputs. These values will be hard-coded for the particular table. Set these inputs to the following values:

  • _input.entityname: Set this input to the name of the table.
  • _input.schemanamespace: Set this input to the category element for an entry from the table you are interested in. If the Microsoft service supported data manipulation queries, you could get the schema namespace by searching for a category node at the following URL: http://odata.research.microsoft.com/odata.svc/Downloads. The value for the Microsoft Download table would be "OData.Models.Download".

Once you have these values, add the following two lines directly after the closing rsb:info tag but before the rsb:script method="GET" tag:

<rsb:set attr="_input.entityname" value="Downloads" />
<rsb:set attr="_input.schemanamespace" value="OData.Models.Download" />

How to Access Data from a SharePoint List Based on a Custom View


Originally posted on: http://geekswithblogs.net/dataintegration/archive/2015/06/26/how-to-access-data-from-a-sharepoint-list-based-on-again.aspx

The CData ADO.NET Provider for SharePoint enables you to integrate live SharePoint data with other applications. For example, Visual Studio provides built-in support for ADO.NET data sources. This article shows how to use Server Explorer in Visual Studio to access the data filtered by a SharePoint custom view.

Implementing Access Control with a Database Query

The CData ADO.NET provider's data model exposes each SharePoint list as a separate table; all custom views are available by querying the Views table. You can implement the settings for the custom view that you defined in SharePoint by selecting data with the ID of the custom view:

  1. If you have not already done so, establish a connection to SharePoint: In Server Explorer, right-click the Data Sources node and click Add Connection.

    See the "Getting Started" guide chapter in the help documentation for a guide to the required connection properties and how to set them in Server Explorer.

  2. Retrieve the unique identifier for the custom view: Query the Views table and specify the custom list for the view. For example, the query below retrieves all custom views for the custom list MyCustomList:
    SELECT * FROM Views WHERE List='MyCustomList'
  3. Query the custom list table to retrieve the custom view. Specify the ViewId in the WHERE clause. For example, to retrieve a custom view from MyCustomList, use the following statement:
    SELECT * FROM MyCustomList WHERE ViewId='your-ViewId'


Customize Authentication Header in SwaggerUI using Swashbuckle


Originally posted on: http://geekswithblogs.net/michelotti/archive/2015/06/26/customize-authentication-header-in-swaggerui-using-swashbuckle.aspx

Swagger has quickly established itself as an important tool for building Web APIs on any platform. Swagger enables interactive documentation and client SDK generation/discoverability. One of the most frequently used Swagger tools is Swagger UI, which provides automatically generated HTML assets that give you documentation and even an online test tool. To see Swagger UI in action, check out their demo page.

Although Swagger/Swagger UI can be used for any platform, the Swashbuckle library makes integrating Swagger UI into a .NET Web API app a breeze. In fact, Azure API Apps specifically leverage Swagger via Swashbuckle to provide the metadata for Azure API apps.

Most of the out-of-the-box features of Swagger work great. However, there are times when you need to customize this behavior. For example, by default Swagger UI gives you a textbox for the “API key”. When you execute the request, it simply puts this API key into a query string variable called “api_key”.

But what do you do if you need some other type of authentication? Perhaps you need Basic Auth or suppose the API key needs to be sent in an HTTP header rather than the query string.

Most of the online resources I found suggest that you simply replace the default web page by copying the original and making the changes you need. While it’s great to have this type of flexibility, the problem is that it makes it harder to keep up when new versions come out. You’d have to continually update your code (which you’ve now taken ownership of) each time a new version is released.

What I want to do instead is to simply inject some JavaScript into the page to make this happen. This JavaScript needs to:

  • Add two textboxes to the page (one for username and one for password)
  • Remove/hide the existing textbox for API key
  • Set the Password Authorization (basic auth) header to use these values

Here are the steps to make that happen.

First, make sure the “Swashbuckle” and “Swashbuckle.Core” NuGet packages are added to your project. If you’re working in an Azure API app, they’ll already be added for you.

Next, add a new JavaScript file to your project. I’ll put this file in a folder called “CustomContent”.

Right-click this new JavaScript file and select “Properties”. Then change its “Build Action” to “Embedded Resource”.

Next, go to the SwaggerConfig.cs file that was added to your project when you added the Swashbuckle NuGet package. This file contains a ton of commented code – this is just to show you example configuration code that you can use. If you scroll down that file, you’ll see a commented method call to the InjectJavaScript() method. Right below that, I can now add this line of C# code:

c.InjectJavaScript(thisAssembly, "SwashbuckleCustomAuth.CustomContent.basic-auth.js");

Pay close attention to this string. The default namespace for my project happens to be “SwashbuckleCustomAuth”. Then I’ve put this JavaScript file in a folder called “CustomContent”. Finally, I give the name of the JS file. Make sure you’ve got this string correct so it will find the embedded resource properly.

Next, let’s have a look at the JavaScript code that we need to put in to basic-auth.js:

(function () {
    $(function () {
        var basicAuthUI =
            '<div class="input"><input placeholder="username" id="input_username" name="username" type="text" size="10"/></div>' +
            '<div class="input"><input placeholder="password" id="input_password" name="password" type="password" size="10"/></div>';
        $(basicAuthUI).insertBefore('#api_selector div.input:last-child');
        $("#input_apiKey").hide();

        $('#input_username').change(addAuthorization);
        $('#input_password').change(addAuthorization);
    });

    function addAuthorization() {
        var username = $('#input_username').val();
        var password = $('#input_password').val();
        if (username && username.trim() != "" && password && password.trim() != "") {
            var basicAuth = new SwaggerClient.PasswordAuthorization('basic', username, password);
            window.swaggerUi.api.clientAuthorizations.add("basicAuth", basicAuth);
            console.log("authorization added: username = " + username + ", password = " + password);
        }
    }
})();

jQuery is already part of the HTML page, so we can leverage it here. I use JavaScript to create the two new textboxes I need, then insert them before the last child of the “api_selector” form in the header. I then hide the default API key textbox since we won’t be using it. Next we need to ensure the header is set correctly whenever the values in these textboxes change. For this we can use SwaggerClient.PasswordAuthorization, which is built into the Swagger JavaScript library. This code ensures it’s added to the request.

With a username of “steve” and a password of “123”, invoking the request now sends an Authorization header for Basic auth in the HTTP request headers. Problem solved.
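For reference (my addition, not from the original post), the header Swagger UI now sends is easy to reproduce by hand: Basic auth is just the base64 encoding of username:password.

```shell
# Build the Authorization header value for user "steve", password "123".
auth=$(printf 'steve:123' | base64)
echo "Authorization: Basic $auth"
# prints: Authorization: Basic c3RldmU6MTIz
```

You can use the same value to exercise your API end to end, e.g. `curl -H "Authorization: Basic $auth" https://example.com/api/values` (hypothetical endpoint).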

Alternatively, suppose that instead of Basic Auth you want the API key sent in a header rather than the query string. Furthermore, suppose the header needs to be called “my-cool-api-key”. In that case we don’t need to add any textboxes of our own; we just repurpose the API key textbox that is already there. We can add a new JavaScript file and make it an embedded resource exactly as described before. The JavaScript for that looks like this:

(function () {
    $(function () {
        $('#input_apiKey').off();
        $('#input_apiKey').on('change', function () {
            var key = this.value;
            if (key && key.trim() !== '') {
                swaggerUi.api.clientAuthorizations.add("key", new SwaggerClient.ApiKeyAuthorization("my-cool-api-key", key, "header"));
            }
        });
    });
})();

One other point can sometimes lead to confusion. You might notice that the Swashbuckle configuration contains methods like this:

c.BasicAuth("basic").Description("Basic HTTP Authentication");

At first glance you might think (or hope) that this makes the UI do Basic Authentication for you, but it doesn’t. It simply changes the metadata in the Swagger schema that *informs* the user which type of authentication is used. If you actually want Swagger UI to execute the appropriate authentication scheme in a .NET project, follow the steps in this blog post.

A sample solution containing these techniques (minus the *actual* authentication) can be downloaded here.

SSD upgrade in a MacBook Pro


Originally posted on: http://geekswithblogs.net/kjones/archive/2015/06/28/165381.aspx

[image: mid-2012 MacBook Pro]

I recently upgraded my wife’s MacBook Pro by replacing the original hard drive with an SSD.  The performance improvements were just as good as when I did the same upgrade for a Lenovo laptop in the family last fall.

Her MacBook has 8GB of RAM, a 2.3 GHz i7 processor, and a 15-inch screen.  It was a low-end MacBook Pro when we purchased it in July 2013.

The SSD was another Crucial drive, an M500, from NewEgg.

I timed a few operations to see what they were before and after the upgrade:

Before upgrade:

  • Seconds to login prompt: 47.8
  • Seconds to login: 9.4
  • Seconds to launch Safari: 13.8
  • Seconds to launch Outlook: 18.4

After upgrade:

  • Seconds to login prompt: 44.6
  • Seconds to login: 4.3
  • Seconds to launch Safari: 2.1
  • Seconds to launch Outlook: 2.6
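Put differently, the before/after numbers work out to sizeable speedup factors (a quick back-of-the-envelope calculation I've added, not from the original post):

```shell
# Speedup = seconds before / seconds after, from the timings above.
awk 'BEGIN {
    printf "Safari:  %.1fx faster\n", 13.8 / 2.1
    printf "Outlook: %.1fx faster\n", 18.4 / 2.6
}'
# prints:
# Safari:  6.6x faster
# Outlook: 7.1x faster
```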

The key measurement for my wife was the time it took to launch applications.  I timed Safari and Outlook, since these are her primary apps, and the results are amazing.  In fact, she laughed at how fast they launched when I first showed her.

How to programmatically upload a new Azure Management Certificate


Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2015/06/29/how-to-programmatically-upload-a-new-azure-management-certificate.aspx

If you want to use the Azure Resource Management APIs, Service Management APIs or Management Libraries, you'll need a management certificate which authenticates your process with the Azure subscription you're accessing. Chances are you'll already have one or more management certs installed (from Visual Studio or the PowerShell SDK or WebMatrix) with the issuer name Windows Azure Tools:


It's a good idea to mint a separate one for your new process, so it's clear what the cert is used for. Creating a new cert is easy with the makecert utility.

This command creates a valid X.509 cert, using SHA1 and a 2048-bit key, installing it to your local certificate store and exporting the public key to a .cer file:

makecert -sky exchange -r -n "CN=my.mgt.app.name" -pe -a sha1 -len 2048 -ss My my-mgmt-app.cer

(See Alice Waddicor's post Generating and using a certificate to authorise Azure Automation for a description of each parameter).
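As an aside (my addition, not the author's method): makecert is a Windows SDK tool, so on other platforms you can mint a comparable self-signed cert with openssl. The file names here are made up.

```shell
# Generate a throwaway self-signed cert and private key (openssl stand-in for makecert).
openssl req -x509 -newkey rsa:2048 -keyout my-mgmt-app.key \
    -out my-mgmt-app.pem -days 365 -nodes \
    -subj "/CN=my.mgt.app.name" 2>/dev/null

# Export the public part as a DER-encoded .cer, like makecert's .cer output.
openssl x509 -in my-mgmt-app.pem -outform DER -out my-mgmt-app.cer

# Print the SHA1 thumbprint -- the value Azure identifies the cert by.
openssl x509 -in my-mgmt-app.pem -noout -fingerprint -sha1
```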

That's the easy part, and you can upload the certificate to your subscription through the Azure Portal, allowing API access to anyone who has the private key for the cert (which you'll need to manage carefully).

If you want to upload the .cer file programmatically, I've wrapped it up in a console app on GitHub: sixeyed/azure-tools/UploadManagementCertificate

The app uses the Microsoft.WindowsAzure.Management.Libraries NuGet package. The upload is easy enough, loading the cert details into ManagementCertificateCreateParameters from an X509Certificate2 object:

var newCertificate = new X509Certificate2(newCertificateCerFilePath); 
var parm = new ManagementCertificateCreateParameters() 
{ 
    Data = newCertificate.RawData, 
    PublicKey = newCertificate.GetPublicKey(), 
    Thumbprint = newCertificate.Thumbprint 
};

And then calling Create on a ManagementClient object - note that the client object throws a CloudException even if the call succeeds and returns the expected 201: Created result (which looks like a bug):

var creds = new CertificateCloudCredentials(subscriptionId, existingCertificate); 
var client = new ManagementClient(creds); 
try 
{ 
    var response = client.ManagementCertificates.Create(parm); 
} 
catch (CloudException ex) 
{ 
    success = ex.Response.StatusCode == HttpStatusCode.Created; 
}

To use the Management SDK to upload a cert, you need a management certificate already uploaded to your subscription for the SDK to use, but you can make use of the existing 'Windows Azure Tools' cert you'll have on a dev box. You could have many of those installed, and the only way to check whether one is valid for your subscription is to try to use it. It's crude, but here's how you find which of the existing certs in your local store is valid for a subscription:

var certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser); 
certStore.Open(OpenFlags.ReadOnly); 
var azureCerts = certStore.Certificates.Find(X509FindType.FindByIssuerName, "Windows Azure Tools", false); 
foreach (var cert in azureCerts) 
{ 
    var creds = new CertificateCloudCredentials(subscriptionId, cert); 
    var client = new ManagementClient(creds); 
    try 
    { 
        var v = client.Locations.List(); 
        return cert; 
    } 
    catch (CloudException ex) 
    { } 
}

Incidentally, don't try to do this with PowerShell. You can use the SDK libraries and create the objects, and this snippet looks like it should work:

$creds = New-Object -TypeName Microsoft.Azure.CertificateCloudCredentials -ArgumentList $subscription.SubscriptionId, $cert 
$client = New-Object -TypeName "Microsoft.WindowsAzure.Management.ManagementClient, Microsoft.WindowsAzure.Management, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" $creds 
try 
{ 
    #doesn't work - throws 'ManagementClient requires a WebRequestHandler in its HTTP pipeline to work with client certificates.' 
    $locs = [Microsoft.WindowsAzure.Management.LocationOperationsExtensions]::List($client.Locations) 
    $existingThumbprint = $cert.Thumbprint 
} 
catch 
{ 
    #ignore, cert not valid for sub 
}

- but it doesn't. When you run it you'll get an exception:

New-Object : Exception calling ".ctor" with "1" argument(s): "ManagementClient requires a WebRequestHandler in its HTTP pipeline to work with client certificates."

Good luck getting past that. When I hit it, I went for the console app route...

Hands-on Labs of Azure Machine Learning


Originally posted on: http://geekswithblogs.net/Jialiang/archive/2015/06/30/hands-on-labs-of-azure-machine-learning.aspx

Deploying a Model with Azure Machine Learning

This lab explores unsupervised learning in Azure Machine Learning and how to deploy a predictive model as a web service. The lab will walk through copying an experiment from the Azure Machine Learning Gallery into the ML Studio, creating a scoring experiment, deploying a model as a web service, and interacting with the API using the included web interface.

“Where should I open my next restaurant location?” This question is often very difficult to answer. The right choice could lead to increased revenue and profit, but the wrong choice could lead to losing a major investment. Trying to make this decision by manually sifting through hundreds or even thousands of possible cities or neighborhoods can be almost impossible. Machine learning can help with this task by analyzing large volumes of data about different locations, finding common characteristics among locations, and grouping those like-attributed locations together. These groups can then be compared to previously successful restaurant locations to help narrow the choices for where to open next. In this lab, you will work with a dataset that includes geographic, economic, and demographic data about different US cities. The model you will explore uses a K-Means algorithm to cluster cities into distinctive buckets.
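The clustering step the lab automates can be sketched outside ML Studio. Below is a minimal, dependency-free Python K-Means pass over invented city feature vectors; the city names, features, and numbers are illustrative assumptions, not the lab's dataset (the lab itself uses ML Studio's built-in K-Means module):

```python
import math

def kmeans(points, k, iters=20):
    """Naive K-Means: returns a cluster index for each point."""
    centroids = points[:k]  # deterministic init: first k points
    assignment = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid
        for i, p in enumerate(points):
            assignment[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if assignment[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return assignment

# (population in millions, median income in $10k) -- illustrative values only
cities = {"A": (8.4, 6.0), "B": (8.1, 5.8), "C": (0.6, 4.1), "D": (0.7, 4.3)}
labels = kmeans(list(cities.values()), k=2)
clusters = {name: labels[i] for i, name in enumerate(cities)}
```

With k=2, the two large cities and the two small cities land in separate clusters, which is the same grouping idea the lab applies to candidate restaurant locations.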

 

Text Mining with R and Azure Machine Learning

This lab explores text analytics and R integration with Azure Machine Learning. It will walk through loading data from an external source, using R scripts in ML Studio, and common text analytics tasks and visualizations.

Social media has become a very influential platform for companies, consumers, and professionals to express ideas and opinions, market new products, advertise sales, or share any other important news and information. Most social media sites include keywords or hashtags that users can post related content to. If companies can access and perform advanced analytics on the keyword posts that are relevant to them, they can learn things such as customer sentiment, related products and companies, and who is buying their products and from where. For this lab, you will be working with real Twitter data pulled from the Twitter API. The data includes real Tweets that used the hashtag #Azure. The R language has an expansive collection of packages and functions for advanced text mining and analytics. The lab uses R scripts executed in ML Studio. These scripts perform data preparation, exploration, and visualization tasks common to text mining. The end result is a visualization that provides context for frequently used terms in the analyzed Tweets.
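The term-frequency work the R scripts perform can be sketched in Python as well. The sample tweets and stopword list below are invented placeholders (the lab itself uses real Twitter data and R packages):

```python
import re
from collections import Counter

# Invented sample tweets -- the lab uses real data pulled from the Twitter API
tweets = [
    "Deploying our new app to #Azure today!",
    "#Azure machine learning makes text mining easy",
    "Text mining tweets about #Azure with R",
]

STOPWORDS = {"our", "to", "with", "about", "the", "a"}

def term_frequencies(docs):
    """Lowercase, tokenize on word characters (keeping hashtags), drop stopwords, count terms."""
    words = (w for doc in docs for w in re.findall(r"[#\w]+", doc.lower()))
    return Counter(w for w in words if w not in STOPWORDS)

freqs = term_frequencies(tweets)
top_terms = freqs.most_common(3)  # most frequent terms first
```

In the lab, the equivalent counts feed a word-cloud style visualization; here most_common() simply surfaces the top terms.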

What's Wrong with GWB


Originally posted on: http://geekswithblogs.net/shaunxu/archive/2015/06/30/whats-wrong-with-gwb.aspx

I started using Geekswithblogs (a.k.a. GWB) in 2010, on the recommendation of one of my friends. I have to say that during the past 5+ years I really enjoyed blogging here, publishing 107 posts that drew 380 comments. GWB provided an awesome platform where I could share my experience and discuss with a lot of talented people.

 

But since last month my blog has looked strange. On May 29th I found that all my categories were lost, and when I tried to create a new category it could not be saved. This means my 107 well-categorized posts are now in a mess.

[screenshot]

Several days later I found that my gallery was emptied in the admin page too, even though I could still access the images stored there.

[screenshot]

 

At first I thought this was not a big issue; maybe GWB was being updated, or maybe my site had been hacked. So when I found the issue on May 29th I tried to contact Jeff Julian, the GWB staff member who had helped me map blog.shaunxu.me to my blog before. But there has been no response so far.

Then I tried to find any channel to the GWB team, but no luck. There seems to be no entry or link on geekswithblogs.net, or in the admin page, that mentions how to contact them. Finally I used the "Suggest" link on geekswithblogs.net and posted an item, but still no reply so far.

[screenshot]

 

Today I suddenly found my blog theme had been changed. After restoring the theme, I realized that publishing a post might be the only way to report my problem. Sorry if I bother you, but I really want to know: what's going on with GWB? Is there anyone still maintaining this site?

 

Hope anyone can help me,

Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

What’s New in C# 6.0: Auto-Property Initializers


Originally posted on: http://geekswithblogs.net/WinAZ/archive/2015/06/30/whatrsquos-new-in-c-6.0-auto-property-initializers.aspx

Today, Somasegar announced that Visual Studio 2015 will be released on July 20th, 2015. This release will also include C# 6.0. This is the first in a series of posts on the new features of C# 6.0 and I’ll cover Auto-Property Initializers. This post discusses what an Auto-Property Initializer is, how to use it, and why it can help you write code more efficiently.

What is an Auto-Property Initializer?

An auto-property initializer allows you to set the value of a property at the same time you declare it in a class. Prior to C# 6.0, you needed to set the property value in a constructor. This is normally not a problem because, in the simple case, you set the value in a constructor, like this, and move on:

    public class PepperoniPizza
    {
        public decimal ExtraPrice { get; set; }

        public PepperoniPizza()
        {
            ExtraPrice = 0.25m;
        }
    }

This PepperoniPizza class has an ExtraPrice property, which is initialized in the constructor. That's fine, but there are a few drawbacks: 1) you need to explicitly declare a constructor just to initialize the value; 2) alternatively, you could have created a fully-implemented property and initialized the backing store field; 3) you might have multiple properties to initialize; and/or 4) you might have multiple constructors, forcing you to duplicate the initialization. In each case, you're doing more work as well as separating initialization from declaration. Auto-property initializers let you do the opposite and combine declaration with initialization, like this:

    public class PepperoniPizza
    {
        public decimal ExtraPrice { get; set; } = 0.25m;
    }

With auto-property initializers, declare the auto-property as normal, but also use an assignment operator, specifying the value to initialize the property with. Then end the statement with a semi-colon. While you might not see this as a huge addition to the language, combined with several other features, this supports the C# 6.0 theme of having a set of features that simplify and make coding faster.

Accessors With Different Visibility

You can initialize auto-properties that have different visibility on accessors. Here’s an example with a protected setter:

        public string Name { get; protected set; } = "Cheeze";

The accessor can also be internal, internal protected, or private. The getter can also have different visibility and you will still be able to initialize the auto-property.

Read-Only Properties

In addition to flexibility with visibility, you can also initialize read-only auto-properties. Here’s an example:

        public List<string> Ingredients { get; } = 
            new List<string> { "dough", "sauce", "cheese" };

This example also shows how to initialize a property with a complex type. Also, auto-properties can’t be write-only, so that also precludes write-only initialization.

Initialization Expressions

Although the previous example initialized with a new List<string> instance, the value is still a static reference. Here are a couple more examples of expressions you can initialize auto-properties with:

        public decimal Price2 { get; set; } = 1m + 2m;

        public double Price3 { get; set; } = Math.PI;

The Price2 initializer is an addition expression, but the values are still numeric literals. Along the same lines, Math.PI is a constant initializer for Price3.

Non-Static Initializers are Verboten!

All of the initializers you’ve seen so far evaluate to a static expression. Non-static expressions, like the ones below, will generate compiler errors:

        // compiler errors
        //public string Name2 { get; set; } = Name;
        //public decimal Price4 { get; set; } = InitMe();

        decimal InitMe() { return 5m; }

The code tries to initialize Name2 with the value of Name, another property, which won’t work. Similarly, InitMe() is an instance method that won’t compile when used as the initializer for Price4. It doesn’t matter that InitMe() returns a numeric literal. Both of these situations generate a compiler error. Here's the error message for Price4:

    A field initializer cannot reference the non-static field, method, or property 'PizzaBase.InitMe()'

Virtual Properties and Type Initialization

A virtual auto-property can be initialized too. Here’s an example:

        public virtual decimal Price { get; set; } = 3.00m;

During initialization of the containing type, Price initializes before the constructor executes. It initializes like a field, meaning that a Price property override isn't called during auto-property initialization. If you want polymorphic initialization, initialize Price in the base class constructor instead, like this:

    public abstract class PizzaBase
    {
        public string Name { get; protected set; } = "Cheeze";

        public virtual decimal Price { get; set; } = 3.00m;

        public PizzaBase(IEnumerable<string> extraIngredients)
        {
            Price = 2.95m;
        }
    }

The abstract PizzaBase class is a base class for the PepperoniPizza class shown below. This class overrides the Price property:

    public class PepperoniPizza : PizzaBase
    {
        public decimal ExtraPrice { get; set; } = 0.25m;

        decimal price;
        public override decimal Price
        {
            get
            {
                return price;
            }

            set
            {
                price = value + .50m;
            }
        }

        public PepperoniPizza(decimal extraFees) : base(new List<string> { "pepperoni" })
        {
            ExtraPrice += extraFees;
            Name = "Pepperoni";
        }
    }

This is a scenario that's more elegant to demo live, but you can test it by adding a full Price property override in a derived class (as shown in PepperoniPizza above), setting a breakpoint in the setter, and stepping through the code with the base class auto-property initializer. The base class Price auto-property initializer executes, but doesn't call the full Price property setter in the derived class. Next, add a statement to the base class constructor to set Price to any value (as shown in PizzaBase above), step through the code, and observe that the assignment in the base class constructor does call the derived class's full Price property setter when executed.

Summary

You’ve now been introduced to auto-property initializers. At the simplest level, you assign a value to an auto-property where it’s declared. You learned that you can initialize read-only properties and that the values must evaluate to a static expression; non-static initializers cause compiler errors. This post also explained how auto-properties initialize like fields, but you can get polymorphic initialization by setting the property in a constructor instead. Auto-property initializers, as one of a set of new features, are a new tool to help you write simpler code and save a few extra keystrokes.

@JoeMayo


Connect to Salesforce Data as a Linked Server


Originally posted on: http://geekswithblogs.net/dataintegration/archive/2015/06/30/connect-to-salesforce-data-as-a-linked-server.aspx

Use the TDS Remoting feature of the ODBC Driver to set up a linked server for Salesforce data.

You can use the TDS Remoting feature to set up a linked server for Salesforce data. After you have started the daemon, you can use the UI in SQL Server Management Studio or call stored procedures to create the linked server. You can then work with Salesforce data just as you would a linked SQL Server instance.

Configure the DSN

If you have not already done so, specify connection properties in a DSN (data source name). You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs. This is the last step of the driver installation. See the "Getting Started" chapter in the help documentation for a guide to setting the required properties in the Microsoft ODBC Data Source Administrator.

Configure the TDS Daemon

The TDS Remoting feature of the ODBC driver enables you to create a linked server for Salesforce. The ODBC driver runs a daemon as a service that listens for TDS requests from clients. The daemon can be configured in a configuration settings file and through the CLI (command-line interface). Follow the steps below to use the configuration settings file to configure the DSN, SSL, access control, and other settings:

  1. Open the CData.ODBC.Salesforce.Remoting.ini file, located in the remoting subfolder in the installation directory.
  2. In the tdsd section, configure the settings for the TDS server:

    [tdsd]
    port = 1434
    maxc = 20
    session-timeout = 20
    logfile = SalesforceRemotingLog.txt
    verbosity = 2
    ssl-cert = "CData.ODBC.Salesforce.Remoting.pfx"
    ssl-subject = "*"
    ssl-password = "test"

    Note: By default, the daemon runs on port 1433, the default SQL Server port. If you already have SQL Server running on port 1433, change the default value for the port.

  3. In the databases section, define the catalog name and set it to the DSN:

    [databases]
    ;The database settings, default to installed system DSN name, odbc connection string is acceptable also.
    CDataSalesforce = DSN=CData Salesforce Source
  4. In the acl section, add users that are allowed to connect to the linked server:

    [acl]
    CDataSalesforce = admin
  5. In the users section, define passwords for authorized users. Below are the default values:

    [users]
    ;Passwords
    admin = test
  6. Start the service the daemon is running under. You can start the service from the Services Snap-In: Click Start -> Run and enter services.msc. Right-click the CData Salesforce TDS Remoting service and click Start.

Create a Linked Server for Salesforce Data

After you have configured and started the daemon, create the linked server and connect. You can use the UI in SQL Server Management Studio or call stored procedures.

Create a Linked Server from the UI

Follow the steps below to create a linked server from the Object Explorer.

  1. Open SQL Server Management Studio and connect to an instance of SQL Server.
  2. In the Object Explorer, expand the node for the SQL Server database. In the Server Objects node, right-click Linked Servers and click New Linked Server. The New Linked Server dialog is displayed.
  3. In the General section, click the Other Data Source option and enter the following information after naming the linked server:
    • Provider: Select SQL Server Native Client 10.0 in the menu.
    • Product Name: Enter a name for the data source.
    • Data Source: Enter the host and port the daemon is running on.
    • Provider String: Enter the following connection string:
      Network Library=DBMSSOCN;
    • Catalog: Enter the catalog you defined in the databases section of the configuration settings file.
  4. In the Security section, select the Be Made Using this Security Context option and enter the username and password of a user you authorized in the acl section of the configuration settings file.

Create a Linked Server Programmatically

In addition to using the SQL Server Management Studio UI to create a linked server, you can use stored procedures:

  1. Call sp_addlinkedserver to create the linked server:

    EXEC sp_addlinkedserver @server='Salesforce',
    @srvproduct='CData.Salesforce.ODBC.Driver',
    @provider='SQLNCLI10',
    @datasrc='localhost,1434',
    @provstr='Network Library=DBMSSOCN;',
    @catalog='CDataSalesforce';
    GO
  2. Call sp_droplinkedsrvlogin to remove the default mappings created by sp_addlinkedserver. By default, sp_addlinkedserver maps all local logins to the linked server.

    EXEC sp_droplinkedsrvlogin @rmtsrvname='Salesforce',
    @locallogin=NULL;
    GO
  3. Call the sp_addlinkedsrvlogin stored procedure to allow SQL Server users to connect with the credentials of an authorized user of the daemon. Note that the credentials you use to connect to the daemon must exist in the daemon's configuration settings file.

    EXEC sp_addlinkedsrvlogin @rmtsrvname='Salesforce',
    @rmtuser='admin',
    @rmtpassword='test',
    @useself='FALSE',
    @locallogin='YOUR-DOMAIN\your-user';
    GO

Connect from SQL Server Management Studio

SQL Server Management Studio uses the SQL Server Native Client OLE DB provider, which requires the ODBC driver to be used in-process. You must enable the 'Allow inprocess' option for the SQL Server Native Client provider to query the linked server from SQL Server Management Studio. To do this, open the properties for the provider you are using under Server Objects -> Linked Servers -> Providers, check the 'Allow inprocess' option, and save the changes.

Execute Queries

You can now execute queries to the Salesforce linked server from any tool that can connect to SQL Server. Set the table name accordingly:

SELECT * FROM [linked server name].[CDataSalesforce].[Salesforce].[Account]

Server 2016 – How to add or remove windows features (including GUI)


Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2015/07/01/server-2016--how-to-add-or-remove-windows-features.aspx

If you try to install Windows Server 2016 Technical Preview 2, you'll notice that Server Core is the default and recommended choice. Of course you can choose a server with a GUI, but in many situations someone else builds servers for you, or in the long term you would like to host your services on Core.

And in most cases, after a successful installation, you will start your work at a plain command prompt:

It looks like scripting languages are taking market share from the GUI. What's more, to reduce the footprint of the Server Core installation, Microsoft removed the GUI-related sources from the %windir%\SxS\ folder. If you want to add the graphical interface, you will need to mount an image of the GUI server. If you are experienced in packaging, software distribution, or just old-fashioned administration, you may prefer to use DISM.exe to add the graphical interface to your server:

Rem mount full server image to upgrade Core into GUI version
mkdir c:\w2016
dism /mount-image /imageFile:D:\sources\install.wim /index:2 /mountDir:c:\w2016 /readonly
rem to install server manager and MMC consoles only
Dism /online /Enable-Feature /FeatureName:Server-Gui-Mgmt /all /source:c:\w2016\windows\ /quiet
rem for GUI experience with Windows Explorer
Dism /online /Enable-Feature /FeatureName:Server-Gui-Shell /all /source:c:\w2016\windows\ /quiet
rem for desktop experience with Media Player and desktop themes
Dism /online /Enable-Feature /FeatureName:DesktopExperience /all /source:c:\w2016\windows\ /quiet
rem unmount the image
DISM.exe /Unmount-Image /MountDir:C:\w2016 /Discard

But if you are an application, virtualization, or automation engineer, you don't care about the GUI; you're not going to log on there at all:

Powershell
Add-WindowsFeature -ComputerName Server1,Server2,Server3 -name Hyper-V -IncludeAllSubFeature
Remove-WindowsFeature -ComputerName Server5,Server4 -name Web-Server -IncludeManagementTools

Everything works fine, except for those little differences in component names. C'mon M$, for what purpose did you create SharePoint, Exchange, and Lync … internal communication, maybe?

DISM                 PowerShell
Server-Gui-Mgmt      Server-Gui-Mgmt-Infra
DesktopExperience    Desktop-Experience

 

And the list of all Windows Server 2016 features, each entry giving the feature Name, DisplayName, and Description:

AD-Certificate

Active Directory Certificate Services

Active Directory Certificate Services (AD CS) is used to create certification authorities and related role services that allow you to issue and manage certificates used in a variety of applications.

ADCS-Cert-Authority

Certification Authority

Certification Authority (CA) is used to issue and manage certificates. Multiple CAs can be linked to form a public key infrastructure.

ADCS-Enroll-Web-Pol

Certificate Enrollment Policy Web Service

The Certificate Enrollment Policy Web Service enables users and computers to obtain certificate enrollment policy information even when the computer is not a member of a domain or if a domain-joined computer is temporarily outside the security boundary of the corporate network. The Certificate Enrollment Policy Web Service works with the Certificate Enrollment Web Service to provide policy-based automatic certificate enrollment for these users and computers.

ADCS-Enroll-Web-Svc

Certificate Enrollment Web Service

The Certificate Enrollment Web Service enables users and computers to enroll for and renew certificates even when the computer is not a member of a domain or if a domain-joined computer is temporarily outside the security boundary of the computer network. The Certificate Enrollment Web Service works together with the Certificate Enrollment Policy Web Service to provide policy-based automatic certificate enrollment for these users and computers.

ADCS-Web-Enrollment

Certification Authority Web Enrollment

Certification Authority Web Enrollment provides a simple Web interface that allows users to perform tasks such as request and renew certificates, retrieve certificate revocation lists (CRLs), and enroll for smart card certificates.

ADCS-Device-Enrollment

Network Device Enrollment Service

Network Device Enrollment Service makes it possible to issue and manage certificates for routers and other network devices that do not have network accounts.

ADCS-Online-Cert

Online Responder

Online Responder makes certificate revocation checking data accessible to clients in complex network environments.

AD-Domain-Services

Active Directory Domain Services

Active Directory Domain Services (AD DS) stores information about objects on the network and makes this information available to users and network administrators. AD DS uses domain controllers to give network users access to permitted resources anywhere on the network through a single logon process.

ADFS-Federation

Active Directory Federation Services

Active Directory Federation Services (AD FS) provides simplified, secured identity federation and Web single sign-on (SSO) capabilities. AD FS includes a Federation Service that enables browser-based Web SSO.

ADLDS

Active Directory Lightweight Directory Services

Active Directory Lightweight Directory Services (AD LDS) provides a store for application-specific data, for directory-enabled applications that do not require the infrastructure of Active Directory Domain Services. Multiple instances of AD LDS can exist on a single server, each of which can have its own schema.

ADRMS

Active Directory Rights Management Services

Active Directory Rights Management Services (AD RMS) helps you protect information from unauthorized use. AD RMS establishes the identity of users and provides authorized users with licenses for protected information.

ADRMS-Server

Active Directory Rights Management Server

Active Directory Rights Management Services (AD RMS) helps you protect information from unauthorized use. AD RMS establishes the identity of users and provides authorized users with licenses for protected information.

ADRMS-Identity

Identity Federation Support

Identity Federation Support leverages federated trust relationships between your organization and other organizations to establish user identities and provide access to protected information created by either organization. For example, a trust created with Active Directory Federation Services can be used to establish user identities for AD RMS.

DHCP

DHCP Server

Dynamic Host Configuration Protocol (DHCP) Server enables you to centrally configure, manage, and provide temporary IP addresses and related information for client computers.

DNS

DNS Server

Domain Name System (DNS) Server provides name resolution for TCP/IP networks. DNS Server is easier to manage when it is installed on the same server as Active Directory Domain Services. If you select the Active Directory Domain Services role, you can install and configure DNS Server and Active Directory Domain Services to work together.

Fax

Fax Server

Fax Server sends and receives faxes and allows you to manage fax resources such as jobs, settings, reports, and fax devices on this computer or on the network.

FileAndStorage-Services

File and Storage Services

File and Storage Services includes services that are always installed, as well as functionality that you can install to help manage file servers and storage.

File-Services

File and iSCSI Services

File and iSCSI Services provides technologies that help you manage file servers and storage, reduce disk space utilization, replicate and cache files to branch offices, move or fail over a file share to another cluster node, and share files by using the NFS protocol.

FS-FileServer

File Server

File Server manages shared folders and enables users to access files on this computer from the network.

FS-BranchCache

BranchCache for Network Files

BranchCache for Network Files provides support for BranchCache on this file server. BranchCache is a wide area network (WAN) bandwidth optimization technology that caches content from your main office content servers at branch office locations, allowing client computers at branch offices to access the content locally rather than over the WAN. After you complete installation, you must share folders and enable hash generation for shared folders by using Group Policy or Local Computer Policy.

FS-Data-Deduplication

Data Deduplication

Data Deduplication saves disk space by storing a single copy of identical data on the volume.

FS-DFS-Namespace

DFS Namespaces

DFS Namespaces enables you to group shared folders located on different servers into one or more logically structured namespaces. Each namespace appears to users as a single shared folder with a series of subfolders. However, the underlying structure of the namespace can consist of numerous shared folders located on different servers and in multiple sites.

FS-DFS-Replication

DFS Replication

DFS Replication is a multimaster replication engine that enables you to synchronize folders on multiple servers across local or wide area network (WAN) network connections. It uses the Remote Differential Compression (RDC) protocol to update only the portions of files that have changed since the last replication. DFS Replication can be used in conjunction with DFS Namespaces, or by itself.

FS-Resource-Manager

File Server Resource Manager

File Server Resource Manager helps you manage and understand the files and folders on a file server by scheduling file management tasks and storage reports, classifying files and folders, configuring folder quotas, and defining file screening policies.

FS-VSS-Agent

File Server VSS Agent Service

File Server VSS Agent Service enables you to perform volume shadow copies of applications that store data files on this file server.

FS-iSCSITarget-Server

iSCSI Target Server

iSCSI Target Server provides services and management tools for iSCSI targets.

iSCSITarget-VSS-VDS

iSCSI Target Storage Provider (VDS and VSS hardware providers)

iSCSI Target Storage Provider enables applications on a server that are connected to an iSCSI target to perform volume shadow copies of data on iSCSI virtual disks. It also enables you to manage iSCSI virtual disks by using older applications that require a Virtual Disk Service (VDS) hardware provider, such as the Diskraid command.

FS-NFS-Service

Server for NFS

Server for NFS enables this computer to share files with UNIX-based computers and other computers that use the network file system (NFS) protocol.

FS-SyncShareService

Work Folders

Work Folders provides a way to use work files from a variety of computers, including work and personal devices. You can use Work Folders to host user files and keep them synchronized - whether users access their files from inside the network or from across the Internet.

Storage-Services

Storage Services

Storage Services provides storage management functionality that is always installed and cannot be removed.

HostGuardianServiceRole

Host Guardian Service

The Host Guardian Service (HGS) server role provides the Attestation & Key Protection services that enable Guarded Hosts to run Shielded virtual machines. The Attestation service validates Guarded Host identity & configuration. The Key Protection service enables distributed access to encrypted transport keys to enable Guarded Hosts to unlock and run Shielded virtual machines.

Hyper-V

Hyper-V

Hyper-V provides the services that you can use to create and manage virtual machines and their resources. Each virtual machine is a virtualized computer system that operates in an isolated execution environment. This allows you to run multiple operating systems simultaneously.

MultiPointServerRole

MultiPoint Services

MultiPoint Services allows multiple users, each with their own independent and familiar Windows experience, to simultaneously share one computer.

NetworkController

Network Controller

The Network Controller provides the point of automation needed for continual configuration, monitoring and diagnostics of virtual networks, physical networks, network services, network topology, address management, etc. within a datacenter stamp.

NPAS

Network Policy and Access Services

Network Policy and Access Services provides Network Policy Server (NPS), which helps safeguard the security of your network.

Print-Services

Print and Document Services

Print and Document Services enables you to centralize print server and network printer management tasks. With this role, you can also receive scanned documents from network scanners and route the documents to a shared network resource, Windows SharePoint Services site, or e-mail addresses.

Print-Server

Print Server

Print Server includes the Print Management snap-in, which is used for managing multiple printers or print servers and migrating printers to and from other Windows print servers.

Print-Scan-Server

Distributed Scan Server

Distributed Scan Server provides the service which receives scanned documents from network scanners and routes them to the correct destinations. It also includes the Scan Management snap-in, which you can use to manage network scanners and configure scan processes.

Print-Internet

Internet Printing

Internet Printing creates a Web site where users can manage print jobs on the server. It also enables users who have Internet Printing Client installed to use a Web browser to connect and print to shared printers on this server by using the Internet Printing Protocol (IPP).

Print-LPD-Service

LPD Service

Line Printer Daemon (LPD) Service enables UNIX-based computers or other computers using the Line Printer Remote (LPR) service to print to shared printers on this server.

RemoteAccess

Remote Access

Remote Access provides seamless connectivity through DirectAccess, VPN, and Web Application Proxy. DirectAccess provides an Always On and Always Managed experience. RAS provides traditional VPN services, including site-to-site (branch-office or cloud-based) connectivity. Web Application Proxy enables the publishing of selected HTTP- and HTTPS-based applications from your corporate network to client devices outside of the corporate network. Routing provides traditional routing capabilities, including NAT and other connectivity options. RAS and Routing can be deployed in single-tenant or multi-tenant mode.

DirectAccess-VPN

DirectAccess and VPN (RAS)

DirectAccess gives users the experience of being seamlessly connected to their corporate network any time they have Internet access. With DirectAccess, mobile computers can be managed any time the computer has Internet connectivity, ensuring mobile users stay up-to-date with security and system health policies. VPN uses the connectivity of the Internet plus a combination of tunnelling and data encryption technologies to connect remote clients and remote offices.

Routing

Routing

Routing provides support for NAT Routers, LAN Routers running BGP, RIP, and multicast capable routers (IGMP Proxy).

Web-Application-Proxy

Web Application Proxy

Web Application Proxy enables the publishing of selected HTTP- and HTTPS-based applications from your corporate network to client devices outside of the corporate network. It can use AD FS to ensure that users are authenticated before they gain access to published applications. Web Application Proxy also provides proxy functionality for your AD FS servers.

Remote-Desktop-Services

Remote Desktop Services

Remote Desktop Services enables users to access virtual desktops, session-based desktops, and RemoteApp programs. Use the Remote Desktop Services installation to configure a Virtual machine-based or a Session-based desktop deployment.

RDS-Connection-Broker

Remote Desktop Connection Broker

Remote Desktop Connection Broker (RD Connection Broker) allows users to reconnect to their existing virtual desktops, RemoteApp programs, and session-based desktops. It enables even load distribution across RD Session Host servers in a session collection or across pooled virtual desktops in a pooled virtual desktop collection, and provides access to virtual desktops in a virtual desktop collection.

RDS-Gateway

Remote Desktop Gateway

Remote Desktop Gateway (RD Gateway) enables authorized users to connect to virtual desktops, RemoteApp programs, and session-based desktops on the corporate network or over the Internet.

RDS-Licensing

Remote Desktop Licensing

Remote Desktop Licensing (RD Licensing) manages the licenses required to connect to a Remote Desktop Session Host server or a virtual desktop. You can use RD Licensing to install, issue, and track the availability of licenses.

RDS-RD-Server

Remote Desktop Session Host

Remote Desktop Session Host (RD Session Host) enables a server to host RemoteApp programs or session-based desktops. Users can connect to RD Session Host servers in a session collection to run programs, save files, and use resources on those servers. Users can access an RD Session Host server by using the Remote Desktop Connection client or by using RemoteApp programs.

RDS-Virtualization

Remote Desktop Virtualization Host

Remote Desktop Virtualization Host (RD Virtualization Host) enables users to connect to virtual desktops by using RemoteApp and Desktop Connection.

RDS-Web-Access

Remote Desktop Web Access

Remote Desktop Web Access (RD Web Access) enables users to access RemoteApp and Desktop Connection through the Start menu or through a web browser. RemoteApp and Desktop Connection provides users with a customized view of RemoteApp programs, session-based desktops, and virtual desktops.

VolumeActivation

Volume Activation Services

Volume Activation Services enables you to automate and simplify the management of Key Management Service (KMS) host keys and the volume key activation infrastructure for a network. With this service you can install and manage a KMS host, or configure Microsoft Active Directory-Based Activation to provide volume activation for domain-joined systems.

Web-Server

Web Server (IIS)

Web Server (IIS) provides a reliable, manageable, and scalable Web application infrastructure.

Web-WebServer

Web Server

Web Server provides support for HTML Web sites and optional support for ASP.NET, ASP, and Web server extensions. You can use the Web Server to host an internal or external Web site or to provide an environment for developers to create Web-based applications.

Web-Common-Http

Common HTTP Features

Common HTTP Features supports basic HTTP functionality, such as delivering standard file formats and configuring custom server properties. Use Common HTTP Features to create custom error messages, to configure how the server responds to requests that do not specify a document, or to automatically redirect some requests to a different location.

Web-Default-Doc

Default Document

Default Document lets you configure a default file for the Web server to return when users do not specify a file in a request URL. Default documents make it easier and more convenient for users to reach your Web site.
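As a minimal web.config sketch, a default document list might look like the following (the file names are illustrative, not from the original):

```xml
<configuration>
  <system.webServer>
    <!-- Files are tried in order when a request does not name a file -->
    <defaultDocument enabled="true">
      <files>
        <clear />
        <add value="index.html" />
        <add value="default.aspx" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```

The `<clear />` element drops the server-level defaults so only the documents listed here are tried.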

Web-Dir-Browsing

Directory Browsing

Directory Browsing lets users see the contents of a directory on your Web server. Use Directory Browsing to enable an automatically generated list of all directories and files available in a directory when users do not specify a file in a request URL and default documents are either disabled or not configured.

Web-Http-Errors

HTTP Errors

HTTP Errors allows you to customize the error messages returned to users' browsers when the Web server detects a fault condition. Use HTTP Errors to give users a better experience when they encounter an error message. Consider providing users with an e-mail address for staff who can help them resolve the error.

Web-Static-Content

Static Content

Static Content allows the Web server to publish static Web file formats, such as HTML pages and image files. Use Static Content to publish files on your Web server that users can then view using a Web browser.

Web-Http-Redirect

HTTP Redirection

HTTP Redirection provides support to redirect user requests to a specific destination. Use HTTP redirection whenever you want customers who might use one URL to actually end up at another URL. This is helpful in many situations, from simply renaming your Web site, to overcoming a domain name that is difficult to spell, or forcing clients to use a secure channel.
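A redirect of this kind can be expressed as a short web.config sketch; the destination URL below is a hypothetical example:

```xml
<configuration>
  <system.webServer>
    <!-- Permanently redirect requests for this site to the new location -->
    <httpRedirect enabled="true"
                  destination="https://www.example.com/newsite"
                  exactDestination="false"
                  httpResponseStatus="Permanent" />
  </system.webServer>
</configuration>
```

With `exactDestination="false"`, the requested path is appended to the destination; `httpResponseStatus="Permanent"` returns HTTP 301.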

Web-DAV-Publishing

WebDAV Publishing

WebDAV Publishing (Web Distributed Authoring and Versioning) enables you to publish files to and from a Web server by using the HTTP protocol. Because WebDAV uses HTTP, it works through most firewalls without modification.

Web-Health

Health and Diagnostics

Health and Diagnostics provides infrastructure to monitor, manage, and troubleshoot the health of Web servers, sites, and applications.

Web-Http-Logging

HTTP Logging

HTTP Logging provides logging of Web site activity for this server. When a loggable event, usually an HTTP transaction, occurs, IIS calls the selected logging module, which then writes to one of the logs stored in the file system of the Web server. These logs are in addition to those provided by the operating system.

Web-Custom-Logging

Custom Logging

Custom Logging provides support for logging Web server activity in a format that differs considerably from the manner in which IIS generates log files. Use Custom Logging to create your own logging module. Custom logging modules are added to IIS by registering a new COM component that implements ILogPlugin or ILogPluginEx.

Web-Log-Libraries

Logging Tools

Logging Tools provides infrastructure to manage Web server logs and automate common logging tasks.

Web-ODBC-Logging

ODBC Logging

ODBC Logging provides infrastructure that supports logging Web server activity to an ODBC-compliant database. With a logging database, you can programmatically display and manipulate data from the logging database on an HTML page. You might do this to search the logs for specific events or to call out user-defined events that you want to monitor.

Web-Request-Monitor

Request Monitor

Request Monitor provides infrastructure to monitor Web application health by capturing information about HTTP requests in an IIS worker process. Administrators and developers can use Request Monitor to understand which HTTP requests are executing in a worker process when the worker process has become unresponsive or very slow.

Web-Http-Tracing

Tracing

Tracing provides infrastructure to diagnose and troubleshoot Web applications. With failed request tracing, you can troubleshoot difficult-to-capture events such as poor performance or authentication-related failures. This feature buffers trace events for a request and only flushes them to disk if the request falls into a user-configured error condition.

Web-Performance

Performance

Performance provides infrastructure for output caching by integrating the dynamic output-caching capabilities of ASP.NET with the static output-caching capabilities that were present in IIS 6.0. IIS also lets you use bandwidth more effectively and efficiently by using common compression mechanisms such as Gzip and Deflate.

Web-Stat-Compression

Static Content Compression

Static Content Compression provides infrastructure to configure HTTP compression of static content. This allows more efficient use of bandwidth. Unlike dynamic responses, compressed static responses can be cached without degrading CPU resources.

Web-Dyn-Compression

Dynamic Content Compression

Dynamic Content Compression provides infrastructure to configure HTTP compression of dynamic content. Enabling dynamic compression always gives you more efficient utilization of bandwidth, but if your server's processor utilization is already very high, the CPU load imposed by dynamic compression might make your site perform more slowly.
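Both compression features above are toggled by a single web.config element; this sketch enables static compression while leaving dynamic compression off, reflecting the CPU trade-off just described:

```xml
<configuration>
  <system.webServer>
    <!-- Compress static responses (cacheable, cheap); leave dynamic
         compression off on servers where CPU utilization is already high -->
    <urlCompression doStaticCompression="true" doDynamicCompression="false" />
  </system.webServer>
</configuration>
```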

Web-Security

Security

Security provides infrastructure for securing the Web server from users and requests. IIS supports multiple authentication methods. Pick an appropriate authentication scheme based upon the role of the server. Filter all incoming requests, rejecting without processing requests that match user defined values, or restrict requests based on originating address space.

Web-Filtering

Request Filtering

Request Filtering screens all incoming requests to the server and filters these requests based on rules set by the administrator. Many malicious attacks share common characteristics, such as extremely long requests or requests for an unusual action. By filtering requests, you can mitigate the impact of these types of attacks.
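A minimal request-filtering sketch in web.config might look like this (the limits chosen are illustrative):

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- Reject extremely long URLs and query strings -->
        <requestLimits maxUrl="2048" maxQueryString="1024" />
        <!-- Refuse the TRACE verb outright -->
        <verbs>
          <add verb="TRACE" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

Requests that violate these rules are rejected with an HTTP 404 substatus before any application code runs.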

Web-Basic-Auth

Basic Authentication

Basic authentication offers strong browser compatibility. Appropriate for small internal networks, this authentication method is rarely used on the public Internet. Its major disadvantage is that it transmits passwords across the network using an easily decrypted algorithm. If intercepted, these passwords are simple to decipher. Use SSL with Basic authentication.

Web-CertProvider

Centralized SSL Certificate Support

Centralized SSL Certificate Support enables you to manage SSL server certificates centrally using a file share. Maintaining SSL server certificates on a file share simplifies management since there is one place to manage them.

Web-Client-Auth

Client Certificate Mapping Authentication

Client Certificate Mapping Authentication uses client certificates to authenticate users. A client certificate is a digital ID from a trusted source. IIS offers two types of authentication using client certificate mapping. This type uses Active Directory to offer one-to-one certificate mappings across multiple Web servers.

Web-Digest-Auth

Digest Authentication

Digest authentication works by sending a password hash to a Windows domain controller to authenticate users. When you need improved security over Basic authentication, consider using Digest authentication, especially if users who must be authenticated access your Web site from behind firewalls and proxy servers.

Web-Cert-Auth

IIS Client Certificate Mapping Authentication

IIS Client Certificate Mapping Authentication uses client certificates to authenticate users. A client certificate is a digital ID from a trusted source. IIS offers two types of authentication using client certificate mapping. This type uses IIS to offer one-to-one or many-to-one certificate mapping. Native IIS mapping of certificates offers better performance.

Web-IP-Security

IP and Domain Restrictions

IP and Domain Restrictions allow you to enable or deny content based upon the originating IP address or domain name of the request. Instead of using groups, roles, or NTFS file system permissions to control access to content, you can specify IP addresses or domain names.
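As a sketch, an allow-list restriction can be configured in web.config; the subnet shown is a hypothetical internal range:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Deny everything except the internal subnet listed below -->
      <ipSecurity allowUnlisted="false">
        <add ipAddress="192.168.1.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```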

Web-Url-Auth

URL Authorization

URL Authorization allows you to create rules that restrict access to Web content. You can bind these rules to users, groups, or HTTP header verbs. By configuring URL authorization rules, you can prevent employees who are not members of certain groups from accessing content or interacting with Web pages.
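A URL authorization rule set can be sketched in web.config like this; the role name is illustrative:

```xml
<configuration>
  <system.webServer>
    <security>
      <authorization>
        <!-- Remove the inherited allow-all rule, then grant one role -->
        <remove users="*" roles="" verbs="" />
        <add accessType="Allow" roles="Managers" />
      </authorization>
    </security>
  </system.webServer>
</configuration>
```

Placed in a subdirectory's web.config, this restricts that content to members of the named role while leaving the rest of the site unaffected.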

Web-Windows-Auth

Windows Authentication

Windows authentication is a low cost authentication solution for internal Web sites. This authentication scheme allows administrators in a Windows domain to take advantage of the domain infrastructure for authenticating users. Do not use Windows authentication if users who must be authenticated access your Web site from behind firewalls and proxy servers.

Web-App-Dev

Application Development

Application Development provides infrastructure for developing and hosting Web applications. Use these features to create Web content or extend the functionality of IIS. These technologies typically provide a way to perform dynamic operations that result in the creation of HTML output, which IIS then sends to fulfill client requests.

Web-Net-Ext

.NET Extensibility 3.5

.NET extensibility allows managed code developers to change, add and extend web server functionality in the entire request pipeline, the configuration, and the UI. Developers can use the familiar ASP.NET extensibility model and rich .NET APIs to build Web server features that are just as powerful as those written using the native C++ APIs.

Web-Net-Ext45

.NET Extensibility 4.6

.NET extensibility allows managed code developers to change, add and extend web server functionality in the entire request pipeline, the configuration, and the UI. Developers can use the familiar ASP.NET extensibility model and rich .NET APIs to build Web server features that are just as powerful as those written using the native C++ APIs.

Web-AppInit

Application Initialization

Application Initialization performs expensive web application initialization tasks before serving web pages.

Web-ASP

ASP

Active Server Pages (ASP) provides a server side scripting environment for building Web sites and Web applications. Offering improved performance over CGI scripts, ASP provides IIS with native support for both VBScript and JScript. Use ASP if you have existing applications that require ASP support. For new development, consider using ASP.NET.

Web-Asp-Net

ASP.NET 3.5

ASP.NET provides a server side object oriented programming environment for building Web sites and Web applications using managed code. ASP.NET is not simply a new version of ASP. Having been entirely re-architected to provide a highly productive programming experience based on the .NET Framework, ASP.NET provides a robust infrastructure for building web applications.

Web-Asp-Net45

ASP.NET 4.6

ASP.NET provides a server side object oriented programming environment for building Web sites and Web applications using managed code. ASP.NET 4.6 is not simply a new version of ASP. Having been entirely re-architected to provide a highly productive programming experience based on the .NET Framework, ASP.NET provides a robust infrastructure for building web applications.

Web-CGI

CGI

CGI defines how a Web server passes information to an external program. Typical uses might include using a Web form to collect information and then passing that information to a CGI script to be emailed somewhere else. Because CGI is a standard, CGI scripts can be written using a variety of programming languages. The downside to using CGI is the performance overhead.

Web-ISAPI-Ext

ISAPI Extensions

Internet Server Application Programming Interface (ISAPI) Extensions provides support for dynamic Web content developing using ISAPI extensions. An ISAPI extension runs when requested just like any other static HTML file or dynamic ASP file. Since ISAPI applications are compiled code, they are processed much faster than ASP files or files that call COM+ components.

Web-ISAPI-Filter

ISAPI Filters

Internet Server Application Programming Interface (ISAPI) Filters provides support for Web applications that use ISAPI filters. ISAPI filters are files that can extend or change the functionality provided by IIS. An ISAPI filter reviews every request made to the Web server, until the filter finds one that it needs to process.

Web-Includes

Server Side Includes

Server Side Includes (SSI) is a scripting language used to dynamically generate HTML pages. The script runs on the server before the page is delivered to the client and typically involves inserting one file into another. You might create an HTML navigation menu and use SSI to dynamically add it to all pages on a Web site.

Web-WebSockets

WebSocket Protocol

IIS 10.0 and ASP.NET 4.6 support writing server applications that communicate over the WebSocket Protocol.

Web-Ftp-Server

FTP Server

FTP Server enables the transfer of files between a client and server by using the FTP protocol. Users can establish an FTP connection and transfer files by using an FTP client or FTP-enabled Web browser.

Web-Ftp-Service

FTP Service

FTP Service enables FTP publishing on a Web server.

Web-Ftp-Ext

FTP Extensibility

FTP Extensibility enables support for FTP extensibility features such as custom providers, ASP.NET users or IIS Manager users.

Web-Mgmt-Tools

Management Tools

Management Tools provide infrastructure to manage a Web server that runs IIS 10. You can use the IIS user interface, command-line tools, and scripts to manage the Web server. You can also edit the configuration files directly.

Web-Mgmt-Console

IIS Management Console

IIS Management Console provides infrastructure to manage IIS 10 by using a user interface. You can use the IIS management console to manage a local or remote Web server that runs IIS 10. To manage SMTP, you must install and use the IIS 6 Management Console.

Web-Mgmt-Compat

IIS 6 Management Compatibility

IIS 6 Management Compatibility provides forward compatibility for your applications and scripts that use the two IIS APIs, Admin Base Object (ABO) and Active Directory Service Interface (ADSI). You can use existing IIS 6 scripts to manage the IIS 10 Web server.

Web-Metabase

IIS 6 Metabase Compatibility

IIS 6 Metabase Compatibility provides infrastructure to query and configure the metabase so that you can run applications and scripts migrated from earlier versions of IIS that use Admin Base Object (ABO) or Active Directory Service Interface (ADSI) APIs.

Web-Lgcy-Mgmt-Console

IIS 6 Management Console

IIS 6 Management Console provides infrastructure for administration of remote IIS 6.0 servers from this computer.

Web-Lgcy-Scripting

IIS 6 Scripting Tools

IIS 6 Scripting Tools provide the ability to continue using IIS 6 scripting tools that you built to manage IIS 6 in IIS 10, especially if your applications and scripts use ActiveX Data Objects (ADO) or Active Directory Service Interface (ADSI) APIs. IIS 6 Scripting Tools require the Windows Process Activation Service Configuration API.

Web-WMI

IIS 6 WMI Compatibility

IIS 6 WMI Compatibility provides Windows Management Instrumentation (WMI) scripting interfaces to programmatically manage and automate tasks for the IIS 10.0 Web server, using scripts that you create against the WMI provider. This service includes the WMI CIM Studio, WMI Event Registration, WMI Event Viewer, and WMI Object Browser tools to manage sites.

Web-Scripting-Tools

IIS Management Scripts and Tools

IIS Management Scripts and Tools provide infrastructure to programmatically manage an IIS 10 Web server by using commands in a command window or by running scripts. You can use these tools when you want to automate commands in batch files or when you do not want to incur the overhead of managing IIS by using the user interface.

Web-Mgmt-Service

Management Service

Management Service allows the Web server to be managed remotely from another computer using IIS Manager.

WDS

Windows Deployment Services

Windows Deployment Services provides a simplified, secure means of rapidly and remotely deploying Windows operating systems to computers over the network.

WDS-Deployment

Deployment Server

Deployment Server provides the full functionality of Windows Deployment Services, which you can use to configure and remotely install Windows operating systems. With Windows Deployment Services, you can create and customize images and then use them to reimage computers. Deployment Server is dependent on the core parts of Transport Server.

WDS-Transport

Transport Server

Transport Server provides a subset of the functionality of Windows Deployment Services. It contains only the core networking parts, which you can use to transmit data using multicasting on a stand-alone server. You should use this role service if you want to transmit data using multicasting, but do not want to incorporate all of Windows Deployment Services.

ServerEssentialsRole

Windows Server Essentials Experience

Windows Server Essentials Experience sets up the IT infrastructure and provides powerful functions such as PC backup, which helps protect data, and Remote Web Access, which lets you access business information from virtually anywhere. Windows Server Essentials also helps you to easily and quickly connect to cloud-based applications and services to extend the functionality of your server.

UpdateServices

Windows Server Update Services

Windows Server Update Services allows network administrators to specify the Microsoft updates that should be installed, create separate groups of computers for different sets of updates, and get reports on the compliance levels of the computers and the updates that must be installed.

UpdateServices-WidDB

WID Connectivity

Installs the database used by WSUS into the Windows Internal Database (WID).

UpdateServices-Services

WSUS Services

Installs the services used by Windows Server Update Services: Update Service, the Reporting Web Service, the API Remoting Web Service, the Client Web Service, the Simple Web Authentication Web Service, the Server Synchronization Service, and the DSS Authentication Web Service.

UpdateServices-DB

SQL Server Connectivity

Installs the feature that enables WSUS to connect to a Microsoft SQL Server database.

NET-Framework-Features

.NET Framework 3.5 Features

.NET Framework 3.5 combines the power of the .NET Framework 2.0 APIs with new technologies for building applications that offer appealing user interfaces, protect your customers' personal identity information, enable seamless and secure communication, and provide the ability to model a range of business processes.

NET-Framework-Core

.NET Framework 3.5 (includes .NET 2.0 and 3.0)

.NET Framework 3.5 combines the power of the .NET Framework 2.0 APIs with new technologies for building applications that offer appealing user interfaces, protect your customers' personal identity information, enable seamless and secure communication, and provide the ability to model a range of business processes.

NET-HTTP-Activation

HTTP Activation

HTTP Activation supports process activation via HTTP. Applications that use HTTP Activation can start and stop dynamically in response to work items that arrive over the network via HTTP.

NET-Non-HTTP-Activ

Non-HTTP Activation

Non-HTTP Activation supports process activation via Message Queuing, TCP and named pipes. Applications that use Non-HTTP Activation can start and stop dynamically in response to work items that arrive over the network via Message Queuing, TCP and named pipes.

NET-Framework-45-Features

.NET Framework 4.6 Features

.NET Framework 4.6 provides a comprehensive and consistent programming model for quickly and easily building and running applications that are built for various platforms including desktop PCs, Servers, smart phones and the public and private cloud.

NET-Framework-45-Core

.NET Framework 4.6

.NET Framework 4.6 provides a comprehensive and consistent programming model for quickly and easily building and running applications that are built for various platforms including desktop PCs, Servers, smart phones and the public and private cloud.

NET-Framework-45-ASPNET

ASP.NET 4.6

ASP.NET 4.6 provides core support for running ASP.NET 4.6 stand-alone applications as well as applications that are integrated with IIS.

NET-WCF-Services45

WCF Services

Windows Communication Foundation (WCF) Activation uses Windows Process Activation Service to invoke applications remotely over the network by using protocols such as HTTP, Message Queuing, TCP, and named pipes. Consequently, applications can start and stop dynamically in response to incoming work items, resulting in application hosting that is more robust, manageable, and efficient.

NET-WCF-HTTP-Activation45

HTTP Activation

HTTP Activation supports process activation via HTTP. Applications that use HTTP Activation can start and stop dynamically in response to work items that arrive over the network via HTTP.

NET-WCF-MSMQ-Activation45

Message Queuing (MSMQ) Activation

Message Queuing Activation supports process activation via Message Queuing. Applications that use Message Queuing Activation can start and stop dynamically in response to work items that arrive over the network via Message Queuing.

NET-WCF-Pipe-Activation45

Named Pipe Activation

Named Pipes Activation supports process activation via named pipes. Applications that use Named Pipes Activation can start and stop dynamically in response to work items that arrive over the network via named pipes.

NET-WCF-TCP-Activation45

TCP Activation

TCP Activation supports process activation via TCP. Applications that use TCP Activation can start and stop dynamically in response to work items that arrive over the network via TCP.

NET-WCF-TCP-PortSharing45

TCP Port Sharing

TCP Port Sharing allows multiple net.tcp applications to share a single TCP port. Consequently, these applications can coexist on the same physical computer in separate, isolated processes, while sharing the network infrastructure required to send and receive traffic over a TCP port, such as port 808.
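A WCF service opts into the shared listener through its binding configuration; this is a minimal sketch, with a hypothetical binding name:

```xml
<configuration>
  <system.serviceModel>
    <bindings>
      <netTcpBinding>
        <!-- Route this binding's traffic through the shared net.tcp listener -->
        <binding name="sharedPortBinding" portSharingEnabled="true" />
      </netTcpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
```

With `portSharingEnabled="true"`, the Net.Tcp Port Sharing Service accepts connections on the shared port and dispatches them to the owning process.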

BITS

Background Intelligent Transfer Service (BITS)

Background Intelligent Transfer Service (BITS) asynchronously transfers files in the foreground or background, controls the flow of the transfers to preserve the responsiveness of other network applications, and automatically resumes file transfers after disconnecting from the network or restarting the computer.

BITS-IIS-Ext

IIS Server Extension

IIS Server Extension allows a computer to receive files uploaded by clients that implement the BITS upload protocol.

BITS-Compact-Server

Compact Server

BITS Compact Server is a stand-alone HTTPS file server that lets you transfer a limited number of large files asynchronously between computers in the same domain or mutually-trusted domains.

BitLocker

BitLocker Drive Encryption

BitLocker Drive Encryption helps to protect data on lost, stolen, or inappropriately decommissioned computers by encrypting the entire volume and checking the integrity of early boot components. Data is only decrypted if those components are successfully verified and the encrypted drive is located in the original computer. Integrity checking requires a compatible Trusted Platform Module (TPM).

BitLocker-NetworkUnlock

BitLocker Network Unlock

BitLocker Network Unlock enables a network-based key protector to be used to automatically unlock BitLocker-protected operating system drives in domain-joined computers when the computer is restarted. This is beneficial if you are doing maintenance operations on computers during non-working hours that require the computer to be restarted to complete the operation.

BranchCache

BranchCache

BranchCache installs the services required to configure this computer as either a hosted cache server or a BranchCache-enabled content server. If you are deploying a content server, it must also be configured as either a Hypertext Transfer Protocol (HTTP) web server or a Background Intelligent Transfer Service (BITS)-based application server. To deploy a BranchCache-enabled file server, use the Add Roles Wizard to install the File Services server role with the File Server and BranchCache for network files role services.

Canary-Network-Diagnostics

Canary Network Diagnostics

Canary network diagnostics enables validation of the physical network.

NFS-Client

Client for NFS

Client for NFS enables this computer to access files on UNIX-based NFS servers. When installed, you can configure a computer to connect to UNIX NFS shares that allow anonymous access.

Data-Center-Bridging

Data Center Bridging

Data Center Bridging (DCB) is a suite of IEEE standards that are used to enhance Ethernet local area networks by providing hardware-based bandwidth guarantees and transport reliability. Use DCB to help enforce bandwidth allocation on a Converged Network Adapter for offloaded storage traffic such as Internet Small Computer System Interface (iSCSI), RDMA over Converged Ethernet, and Fibre Channel over Ethernet.

Direct-Play

Direct Play

Direct Play component.

EnhancedStorage

Enhanced Storage

Enhanced Storage enables support for accessing additional functions made available by Enhanced Storage devices. Enhanced Storage devices have built-in safety features that let you control who can access the data on the device.

Failover-Clustering

Failover Clustering

Failover Clustering allows multiple servers to work together to provide high availability of server roles. Failover Clustering is often used for File Services, virtual machines, database applications, and mail applications.

GPMC

Group Policy Management

Group Policy Management is a scriptable Microsoft Management Console (MMC) snap-in, providing a single administrative tool for managing Group Policy across the enterprise. Group Policy Management is the standard tool for managing Group Policy.

HostGuardian

Host Guardian Hyper-V Support

Host Guardian provides the features necessary on a Hyper-V server to provision Shielded Virtual Machines.

Web-WHC

IIS Hostable Web Core

IIS Hostable Web Core enables you to write custom code that will host core IIS functionality in your own application. HWC enables your application to serve HTTP requests and use its own applicationHost.config and root web.config configuration files. The HWC application extension is contained in the hwebcore.dll file.

InkAndHandwritingServices

Ink and Handwriting Services

Ink and Handwriting Services includes Ink Support and Handwriting Recognition. Ink Support provides pen/stylus support, including pen flicks support and APIs for calling handwriting recognition. Handwriting Recognition provides handwriting recognizers for a number of languages. Once installed, these components can be called by an application through the handwriting recognition APIs.

Internet-Print-Client

Internet Printing Client

Internet Printing Client enables clients to use Internet Printing Protocol (IPP) to connect and print to printers on the network or Internet.

IPAM

IP Address Management (IPAM) Server

IP Address Management (IPAM) Server provides a central framework for managing your IP address space and corresponding infrastructure servers such as DHCP and DNS. IPAM supports automated discovery of infrastructure servers in an Active Directory forest. IPAM allows you to manage your dynamic and static IPv4 and IPv6 address space, track IP address utilization trends, and monitor and manage DNS and DHCP services on your network.

ISNS

iSNS Server service

Internet Storage Name Server (iSNS) provides discovery services for Internet Small Computer System Interface (iSCSI) storage area networks. iSNS processes registration requests, deregistration requests, and queries from iSNS clients.

Isolated-UserMode

Isolated User Mode

Isolated User Mode enables Virtualization Based Security on the system.

LPR-Port-Monitor

LPR Port Monitor

Line Printer Remote (LPR) Port Monitor enables the computer to print to printers that are shared using any Line Printer Daemon (LPD) service. (LPD service is commonly used by UNIX-based computers and printer-sharing devices.)

ManagementOdata

Management OData IIS Extension

Management OData IIS Extension is a framework for easily exposing Windows PowerShell cmdlets through an ODATA-based web service running under IIS. After enabling this feature, the user must provide a schema file (which contains definitions of the resources to be exposed) and an implementation of callback interfaces to make the web service functional.

Server-Media-Foundation

Media Foundation

Media Foundation, which includes Windows Media Foundation, the Windows Media Format SDK, and a server subset of DirectShow, provides the infrastructure required for applications and services to transcode, analyze, and generate thumbnails for media files. Media Foundation is required by the Desktop Experience.

MSMQ

Message Queuing

Message Queuing provides guaranteed message delivery, efficient routing, security, and priority-based messaging between applications. Message Queuing also accommodates message delivery between applications that run on different operating systems, use dissimilar network infrastructures, are temporarily offline, or that are running at different times.

MSMQ-Services

Message Queuing Services

Message Queuing Services provides guaranteed message delivery, efficient routing, security, and priority-based messaging between applications. Message Queuing also accommodates message delivery between applications that run on different operating systems, use dissimilar network infrastructures, are temporarily offline, or that are running at different times.

MSMQ-Server

Message Queuing Server

Message Queuing Server provides guaranteed message delivery, efficient routing, security, and priority-based messaging. It can be used to implement solutions for both asynchronous and synchronous messaging scenarios.

MSMQ-Directory

Directory Service Integration

Directory Service Integration enables publishing of queue properties to the directory, authentication and encryption of messages using certificates registered in the directory, and routing of messages across directory sites.

MSMQ-HTTP-Support

HTTP Support

HTTP Support enables the sending of messages over HTTP.

MSMQ-Triggers

Message Queuing Triggers

Message Queuing Triggers enables the invocation of a COM component or an executable depending on the filters that you define for the incoming messages in a given queue.

MSMQ-Multicasting

Multicasting Support

Multicasting Support enables queuing and sending of multicast messages to a multicast IP address.

MSMQ-Routing

Routing Service

Routing Service routes messages between different sites and within a site.

MSMQ-DCOM

Message Queuing DCOM Proxy

Message Queuing DCOM Proxy enables this computer to act as a DCOM client of a remote Message Queuing server.

Multipath-IO

Multipath I/O

Multipath I/O, along with the Microsoft Device Specific Module (DSM) or a third-party DSM, provides support for using multiple data paths to a storage device on Windows.

MultiPoint-Connector-Feature

MultiPoint Connector

MultiPoint Connector enables your machine to be monitored and managed by the MultiPoint Manager and Dashboard apps.

NLB

Network Load Balancing

Network Load Balancing (NLB) distributes traffic across several servers, using the TCP/IP networking protocol. NLB is particularly useful for ensuring that stateless applications, such as Web servers running Internet Information Services (IIS), are scalable by adding additional servers as the load increases.

PNRP

Peer Name Resolution Protocol

Peer Name Resolution Protocol allows applications to register and resolve names on your computer so that other computers can communicate with these applications.

qWave

Quality Windows Audio Video Experience

Quality Windows Audio Video Experience (qWave) is a networking platform for audio video (AV) streaming applications on IP home networks. qWave enhances AV streaming performance and reliability by ensuring network quality-of-service (QoS) for AV applications. It provides mechanisms for admission control, run time monitoring and enforcement, application feedback, and traffic prioritization. On Windows Server platforms, qWave provides only rate-of-flow and prioritization services.

CMAK

RAS Connection Manager Administration Kit (CMAK)

Create profiles for connecting to remote servers and networks.

Remote-Assistance

Remote Assistance

Remote Assistance enables you (or a support person) to help users with PC issues or questions. You can view and get control of the user's desktop to troubleshoot and fix problems. Users can also ask for help from friends or co-workers.

RDC

Remote Differential Compression

Remote Differential Compression computes and transfers the differences between two objects over a network using minimal bandwidth.

RSAT

Remote Server Administration Tools

Remote Server Administration Tools includes snap-ins and command-line tools for remotely managing roles and features.

RSAT-Feature-Tools

Feature Administration Tools

Feature Administration Tools includes snap-ins and command-line tools for remotely managing features.

RSAT-SMTP

SMTP Server Tools

 

RSAT-Feature-Tools-BitLocker

BitLocker Drive Encryption Administration Utilities

BitLocker Drive Encryption Administration Utilities include snap-ins and command-line tools for managing BitLocker Drive Encryption features.

RSAT-Feature-Tools-BitLocker-RemoteAdminTool

BitLocker Drive Encryption Tools

BitLocker Drive Encryption Tools include the command line tools manage-bde and repair-bde and the BitLocker cmdlets for Windows PowerShell.

RSAT-Feature-Tools-BitLocker-BdeAducExt

BitLocker Recovery Password Viewer

BitLocker Recovery Password Viewer helps locate BitLocker Drive Encryption recovery passwords for Windows-based computers in Active Directory Domain Services (AD DS).

RSAT-Bits-Server

BITS Server Extensions Tools

BITS Server Extensions Tools includes the Internet Information Services (IIS) 6.0 Manager and IIS Manager snap-ins.

RSAT-Clustering

Failover Clustering Tools

Failover Clustering Tools include the Failover Cluster Manager snap-in, the Cluster-Aware Updating interface, and the Failover Cluster module for Windows PowerShell. Additional tools are the Failover Cluster Automation Server and the Failover Cluster Command Interface.

RSAT-Clustering-Mgmt

Failover Cluster Management Tools

Failover Cluster Management Tools include the Failover Cluster Manager snap-in and the Cluster-Aware Updating interface.

RSAT-Clustering-PowerShell

Failover Cluster Module for Windows PowerShell

The Failover Cluster Module for Windows PowerShell includes Windows PowerShell cmdlets for managing failover clusters. It also includes the Cluster-Aware Updating module for Windows PowerShell, for installing software updates on failover clusters.
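For illustration, a minimal sketch of using the module (the cluster name below is a placeholder, not from this document):

```powershell
# Import the Failover Clustering module.
Import-Module FailoverClusters

# List the nodes of a cluster and their state (replace 'Cluster01').
Get-ClusterNode -Cluster 'Cluster01' | Select-Object Name, State

# Show which clustered roles (groups) are currently online.
Get-ClusterGroup -Cluster 'Cluster01' | Where-Object State -eq 'Online'
```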

RSAT-Clustering-AutomationServer

Failover Cluster Automation Server

Failover Cluster Automation Server is the deprecated Component Object Model (COM) programmatic interface, MSClus.

RSAT-Clustering-CmdInterface

Failover Cluster Command Interface

Failover Cluster Command Interface is the deprecated cluster.exe command-line tool for Failover Clustering. This tool has been replaced by the Failover Clustering module for Windows PowerShell.

IPAM-Client-Feature

IP Address Management (IPAM) Client

IP Address Management (IPAM) Client is used to connect to and manage a local or remote IPAM server. IPAM provides a central framework for managing IP address space and corresponding infrastructure servers such as DHCP and DNS in an Active Directory forest.

RSAT-NLB

Network Load Balancing Tools

Network Load Balancing Tools includes the Network Load Balancing Manager snap-in, the Network Load Balancing module for Windows PowerShell, and the nlb.exe and wlbs.exe command-line tools.

RSAT-Shielded-VM-Tools

Shielded VM Tools

Shielded VM Tools includes the Provisioning Data File Wizard and the Template Disk Wizard.

RSAT-SNMP

SNMP Tools

Simple Network Management Protocol (SNMP) Tools includes tools for managing SNMP.

RSAT-Storage-Replica

Storage Replica Management Tools

Storage Replica Management Tools includes the CIM provider and the Storage Replica module for Windows PowerShell.

RSAT-WINS

WINS Server Tools

WINS Server Tools includes the WINS Manager snap-in and command-line tool for managing the WINS Server.

RSAT-Role-Tools

Role Administration Tools

Role Administration Tools includes snap-ins and command-line tools for remotely managing roles.

RSAT-AD-Tools

AD DS and AD LDS Tools

Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) Tools includes snap-ins and command-line tools for remotely managing AD DS and AD LDS.

RSAT-AD-PowerShell

Active Directory module for Windows PowerShell

The Active Directory module for Windows PowerShell and the tools it provides can be used by Active Directory administrators to manage Active Directory Domain Services (AD DS) at the command line.
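As a quick, hedged example (the OU path and account details below are placeholders):

```powershell
# Import the Active Directory module (installed with RSAT-AD-PowerShell).
Import-Module ActiveDirectory

# List users in a specific OU, showing logon name and enabled state.
Get-ADUser -Filter * -SearchBase 'OU=Staff,DC=contoso,DC=com' |
    Select-Object SamAccountName, Enabled

# Create a new, initially disabled user account.
New-ADUser -Name 'Jane Doe' -SamAccountName 'jdoe' -Enabled $false
```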

RSAT-ADDS

AD DS Tools

Active Directory Domain Services (AD DS) Tools includes snap-ins and command-line tools for remotely managing AD DS.

RSAT-AD-AdminCenter

Active Directory Administrative Center

Active Directory Administrative Center provides users and network administrators with an enhanced Active Directory data management experience and a rich graphical user interface (GUI) to perform common Active Directory object management tasks.

RSAT-ADDS-Tools

AD DS Snap-Ins and Command-Line Tools

Active Directory Domain Services Snap-Ins and Command-Line Tools includes Active Directory Users and Computers, Active Directory Domains and Trusts, Active Directory Sites and Services, and other snap-ins and command-line tools for remotely managing Active Directory domain controllers.

RSAT-ADLDS

AD LDS Snap-Ins and Command-Line Tools

Active Directory Lightweight Directory Services (AD LDS) Snap-Ins and Command-Line Tools includes Active Directory Sites and Services, ADSI Edit, Schema Manager, and other snap-ins and command-line tools for managing AD LDS.

RSAT-Hyper-V-Tools

Hyper-V Management Tools

Hyper-V Management Tools includes GUI and command-line tools for managing Hyper-V.

Hyper-V-Tools

Hyper-V GUI Management Tools

Hyper-V GUI Management Tools includes the Hyper-V Manager snap-in and Virtual Machine Connection tool.

Hyper-V-PowerShell

Hyper-V Module for Windows PowerShell

Hyper-V Module for Windows PowerShell includes Windows PowerShell cmdlets for managing Hyper-V.
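A brief sketch of common cmdlets from the module (the VM name is a placeholder):

```powershell
# List all virtual machines on the local Hyper-V host.
Get-VM | Select-Object Name, State, MemoryAssigned

# Take a checkpoint before making changes, then start the VM.
Checkpoint-VM -Name 'TestVM' -SnapshotName 'Before-Updates'
Start-VM -Name 'TestVM'
```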

RSAT-RDS-Tools

Remote Desktop Services Tools

Remote Desktop Services Tools includes the snap-ins for managing Remote Desktop Services.

RSAT-RDS-Gateway

Remote Desktop Gateway Tools

Remote Desktop Gateway Tools helps you manage and monitor RD Gateway server status and events. By using Remote Desktop Gateway Manager, you can specify events (such as unsuccessful connection attempts to the RD Gateway server) that you want to monitor for auditing purposes.

RSAT-RDS-Licensing-Diagnosis-UI

Remote Desktop Licensing Diagnoser Tools

Remote Desktop Licensing Diagnoser Tools helps you determine which license servers the RD Session Host server or RD Virtualization Host server is configured to use, and whether those license servers have licenses available to issue to users or computing devices that are connecting to the servers.

RDS-Licensing-UI

Remote Desktop Licensing Tools

Remote Desktop Licensing Tools helps you manage the licenses required to connect to a Remote Desktop Session Host server or a virtual desktop. You can use RD Licensing to install, issue, and track the availability of licenses.

UpdateServices-RSAT

Windows Server Update Services Tools

Windows Server Update Services Tools includes graphical and PowerShell tools for managing WSUS.

UpdateServices-API

API and PowerShell cmdlets

Installs the .NET API and PowerShell cmdlets for remote management, automated task creation, and managing WSUS from the command line.

UpdateServices-UI

User Interface Management Console

Installs the WSUS Management Console user interface (UI).

RSAT-ADCS

Active Directory Certificate Services Tools

Active Directory Certificate Services Tools includes the Certification Authority, Certificate Templates, Enterprise PKI, and Online Responder Management snap-ins.

RSAT-ADCS-Mgmt

Certification Authority Management Tools

Active Directory Certification Authority Management Tools includes the Certification Authority, Certificate Templates, and Enterprise PKI snap-ins.

RSAT-Online-Responder

Online Responder Tools

Online Responder Tools includes the Online Responder Management snap-in.

RSAT-ADRMS

Active Directory Rights Management Services Tools

Active Directory Rights Management Services Tools includes the Active Directory Rights Management Services snap-in.

RSAT-DHCP

DHCP Server Tools

DHCP Server Tools includes the DHCP MMC snap-in, DHCP server netsh context and Windows PowerShell module for DHCP Server.

RSAT-DNS-Server

DNS Server Tools

DNS Server Tools includes the DNS Manager snap-in, dnscmd.exe command-line tool and Windows PowerShell module for DNS Server.
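For example, using the DnsServer module against a remote server (server, zone, and record names below are placeholders):

```powershell
# List the zones hosted on a remote DNS server.
Get-DnsServerZone -ComputerName 'dns01.contoso.com'

# Add an A record to a zone.
Add-DnsServerResourceRecordA -ZoneName 'contoso.com' -Name 'web01' `
    -IPv4Address '192.0.2.10' -ComputerName 'dns01.contoso.com'
```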

RSAT-Fax

Fax Server Tools

Fax Server Tools includes the Fax Service Manager snap-in.

RSAT-File-Services

File Services Tools

File Services Tools includes snap-ins and command-line tools for remotely managing File Services.

RSAT-DFS-Mgmt-Con

DFS Management Tools

Includes the DFS Management snap-in, DFS Replication service, DFS Namespaces PowerShell commands, and the dfsutil, dfscmd, dfsdiag, dfsradmin, and dfsrdiag commands.

RSAT-FSRM-Mgmt

File Server Resource Manager Tools

Includes the File Server Resource Manager snap-in and the dirquota, filescrn, and storrept commands.

RSAT-NFS-Admin

Services for Network File System Management Tools

Includes the Network File System snap-in and the nfsadmin, showmount, and rpcinfo commands.

RSAT-CoreFile-Mgmt

Share and Storage Management Tool

Includes the Share and Storage Management snap-in, which lets you create and modify network shares and manage the physical disks on a server.

RSAT-NetworkController

Network Controller Management Tools

Network Controller Management Tools includes Windows PowerShell tools for managing the Network Controller role.

RSAT-NPAS

Network Policy and Access Services Tools

Network Policy and Access Services Tools includes the Network Policy Server snap-in.

RSAT-Print-Services

Print and Document Services Tools

Print and Document Services Tools includes the Print Management snap-in.

RSAT-RemoteAccess

Remote Access Management Tools

Remote Access Management Tools includes graphical and PowerShell tools for managing Remote Access.

RSAT-RemoteAccess-Mgmt

Remote Access GUI and Command-Line Tools

Includes the Remote Access GUI and Command-Line Tools. Remote Access administrators can use the tools to manage Remote Access.

RSAT-RemoteAccess-PowerShell

Remote Access module for Windows PowerShell

Includes the Remote Access provider and cmdlets. Remote Access administrators can use the Windows PowerShell environment and the tools it provides to manage Remote Access at the command line.

RSAT-VA-Tools

Volume Activation Tools

Volume Activation Tools console can be used to manage volume activation license keys on a Key Management Service (KMS) host or in Active Directory Domain Services. You can use the Volume Activation Tools to install, activate, and manage one or more volume activation license keys, and to configure KMS settings.

WDS-AdminPack

Windows Deployment Services Tools

Windows Deployment Services Tools (http://go.microsoft.com/fwlink/?LinkId=294848) includes the Windows Deployment Services snap-in, wdsutil.exe command-line tool, and Remote Install extension for the Active Directory Users and Computers snap-in.

RSAT-HostGuardianService

Windows Server Host Guardian Service Tools

Windows Server Host Guardian Service Tools include the Remote Attestation Service module and the Key Protection Service module for Windows PowerShell.

RPC-over-HTTP-Proxy

RPC over HTTP Proxy

Remote Procedure Call (RPC) over HTTP Proxy relays RPC traffic from client applications over HTTP to the server as an alternative to clients accessing the server over a VPN connection.

Setup-and-Boot-Event-Collection

Setup and Boot Event Collection

This feature enables the collection and logging of setup and boot events from other computers on this network.

Simple-TCPIP

Simple TCP/IP Services

Simple TCP/IP Services supports the following TCP/IP services: Character Generator, Daytime, Discard, Echo and Quote of the Day. Simple TCP/IP Services is provided for backward compatibility and should not be installed unless it is required.

FS-SMB1

SMB 1.0/CIFS File Sharing Support

Support for the SMB 1.0/CIFS file sharing protocol, and the Computer Browser protocol.

FS-SMBBW

SMB Bandwidth Limit

SMB Bandwidth Limit provides a mechanism to track SMB traffic per category (Default, Hyper-V or Live Migration) and allows you to limit the amount of traffic allowed for a given category. It is commonly used to limit the bandwidth used by Live Migration over SMB.
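Once the feature is installed, limits are managed with the SMB cmdlets; a minimal sketch:

```powershell
# Cap SMB traffic used by Live Migration at roughly 1 GB/s.
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 1GB

# Review the limits currently in place.
Get-SmbBandwidthLimit
```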

SMTP-Server

SMTP Server

 

SNMP-Service

SNMP Service

Simple Network Management Protocol (SNMP) Service includes agents that monitor the activity in network devices and report to the network console workstation.

SNMP-WMI-Provider

SNMP WMI Provider

SNMP Windows Management Instrumentation (WMI) Provider enables WMI client scripts and applications to access SNMP information. Clients can use WMI C++ interfaces and scripting objects to communicate with network devices that use the SNMP protocol and can receive SNMP traps as WMI events.

Storage-Replica

Storage Replica

Allows you to replicate data using the Storage Replica feature.

Telnet-Client

Telnet Client

Telnet Client uses the Telnet protocol to connect to a remote Telnet server and run applications on that server.

TFTP-Client

TFTP Client

Trivial File Transfer Protocol (TFTP) Client is used to read files from, or write files to, a remote TFTP server. TFTP is primarily used by embedded devices or systems that retrieve firmware, configuration information, or a system image during the boot process from a TFTP server.

User-Interfaces-Infra

User Interfaces and Infrastructure

This contains the available User Experience and Infrastructure options.

Server-Gui-Mgmt-Infra

Graphical Management Tools and Infrastructure

Graphical Management Tools and Infrastructure includes infrastructure and a minimal server interface that supports GUI management tools.

Desktop-Experience

Desktop Experience

Desktop Experience includes features of Windows 8.1, including Windows Search. Windows Search lets you search your device and the Internet from one place. To learn more about Desktop Experience, including how to disable web results from Windows Search, read http://go.microsoft.com/fwlink/?LinkId=390729

Server-Gui-Shell

Server Graphical Shell

Server Graphical Shell provides the full Windows graphical user interface for server, including File Explorer and Internet Explorer. Uninstalling the shell reduces the servicing footprint of the installation, while leaving the ability to run local GUI management tools, as part of the minimal server interface.

FabricShieldedTools

VM Shielding Tools for Fabric Management

Provides Shielded VM utilities that are used by Fabric Management solutions and should be installed on the Fabric Management server.

Biometric-Framework

Windows Biometric Framework

Windows Biometric Framework (WBF) allows fingerprint devices to be used to identify and verify identities and to sign in to Windows. WBF includes the components required to enable the use of fingerprint devices.

Windows-Identity-Foundation

Windows Identity Foundation 3.5

Windows Identity Foundation (WIF) 3.5 is a set of .NET Framework classes that can be used for implementing claims-based identity in your .NET 3.5 and 4.0 applications. WIF 3.5 has been superseded by WIF classes that are provided as part of .NET 4.5. It is recommended that you use .NET 4.5 for supporting claims-based identity in your applications.

Windows-Internal-Database

Windows Internal Database

Windows Internal Database is a relational data store that can be used only by Windows roles and features, such as Active Directory Rights Management Services, Windows Server Update Services, and Windows System Resource Manager.

PowerShellRoot

Windows PowerShell

Windows PowerShell enables you to automate local and remote Windows administration. This task-based command-line shell and scripting language is built on the Microsoft .NET Framework. It includes hundreds of built-in commands and lets you write and distribute your own commands and scripts.

PowerShell

Windows PowerShell 5.0

Windows PowerShell enables you to automate local and remote Windows administration. This task-based command-line shell and scripting language is built on the Microsoft .NET Framework. It includes hundreds of built-in commands and lets you write and distribute your own commands and scripts.

PowerShell-V2

Windows PowerShell 2.0 Engine

Windows PowerShell 2.0 Engine includes the core components from Windows PowerShell 2.0 for backward compatibility with existing Windows PowerShell host applications.

DSC-Service

Windows PowerShell Desired State Configuration Service

Windows PowerShell Desired State Configuration Service supports configuration management of multiple nodes from a single repository.
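As a minimal illustration of Desired State Configuration itself (the paths below are placeholders):

```powershell
# A configuration that ensures a directory exists on the target node.
Configuration EnsureToolsFolder {
    Node 'localhost' {
        File ToolsFolder {
            DestinationPath = 'C:\Tools'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Compile the configuration to a MOF document, then apply it.
EnsureToolsFolder -OutputPath 'C:\DscOutput'
Start-DscConfiguration -Path 'C:\DscOutput' -Wait -Verbose
```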

PowerShell-ISE

Windows PowerShell ISE

Windows PowerShell Integrated Scripting Environment (ISE) lets you compose, edit, and debug scripts and run multi-line interactive commands in a graphical environment. Features include IntelliSense, tab completion, snippets, color-coded syntax, line numbering, selective execution, graphical debugging, right-to-left language and Unicode support.

WindowsPowerShellWebAccess

Windows PowerShell Web Access

Windows PowerShell Web Access lets a server act as a web gateway, through which an organization's users can manage remote computers by running Windows PowerShell sessions in a web browser. After Windows PowerShell Web Access is installed, an administrator completes the gateway configuration in the Web Server (IIS) management console.
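A hedged sketch of a basic gateway setup (the user and computer names are examples only, and the test certificate is suitable for a lab, not production):

```powershell
# Install the feature, then create the web application in IIS.
Install-WindowsFeature -Name WindowsPowerShellWebAccess -IncludeManagementTools
Install-PswaWebApplication -UseTestCertificate

# Authorize a user to reach one computer through the gateway.
Add-PswaAuthorizationRule -UserName 'CONTOSO\jdoe' `
    -ComputerName 'srv01' -ConfigurationName 'Microsoft.PowerShell'
```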

WAS

Windows Process Activation Service

Windows Process Activation Service generalizes the IIS process model, removing the dependency on HTTP. All the features of IIS that were previously available only to HTTP applications are now available to applications hosting Windows Communication Foundation (WCF) services, using non-HTTP protocols. IIS 10.0 also uses Windows Process Activation Service for message-based activation over HTTP.

WAS-Process-Model

Process Model

Process Model hosts Web and WCF services. Introduced with IIS 6.0, the process model is a new architecture that features rapid failure protection, health monitoring, and recycling. Windows Process Activation Service Process Model removes the dependency on HTTP.

WAS-NET-Environment

.NET Environment 3.5

.NET Environment supports managed code activation in the process model.

WAS-Config-APIs

Configuration APIs

Configuration APIs enable applications that are built using the .NET Framework to configure Windows Process Activation Model programmatically. This lets the application developer automatically configure Windows Process Activation Model settings when the application runs instead of requiring the administrator to manually configure these settings.

Search-Service

Windows Search Service

Windows Search Service provides fast file searches on a server from clients that are compatible with Windows Search Service. Windows Search Service is intended for desktop search or small file server scenarios, and not for enterprise scenarios.

Windows-Server-Antimalware-Features

Windows Server Antimalware Features

Windows Server Antimalware helps protect your machine from malware.

Windows-Server-Antimalware

Windows Server Antimalware

Windows Server Antimalware helps protect your machine from malware.

Windows-Server-Antimalware-Gui

GUI for Windows Server Antimalware

GUI for Windows Server Antimalware.

Windows-Server-Backup

Windows Server Backup

Windows Server Backup allows you to back up and recover your operating system, applications and data. You can schedule backups, and protect the entire server or specific volumes.

Migration

Windows Server Migration Tools

Windows Server Migration Tools includes Windows PowerShell cmdlets that facilitate migration of server roles, operating system settings, files, and shares from computers that are running earlier versions of Windows Server or Windows Server 2012 to computers that are running Windows Server 2012.

WindowsStorageManagementService

Windows Standards-Based Storage Management

Windows Standards-Based Storage Management provides the ability to discover, manage, and monitor storage devices using management interfaces that conform to the SMI-S standard. This functionality is exposed as a set of Windows Management Instrumentation (WMI) classes and Windows PowerShell cmdlets.

Windows-TIFF-IFilter

Windows TIFF IFilter

Windows TIFF IFilter (Tagged Image File Format Index Filter) performs OCR (Optical Character Recognition) on TIFF 6.0-compliant files (.TIF and .TIFF extensions) and in that way enables indexing and full text search of those files.

WinRM-IIS-Ext

WinRM IIS Extension

Windows Remote Management (WinRM) IIS Extension enables a server to receive a management request from a client by using WS-Management. WinRM is the Microsoft implementation of the WS-Management protocol which provides a secure way to communicate with local and remote computers by using Web services.

WINS

WINS Server

Windows Internet Naming Service (WINS) Server provides a distributed database for registering and querying dynamic mappings of NetBIOS names for computers and groups used on your network. WINS maps NetBIOS names to IP addresses and solves the problems arising from NetBIOS name resolution in routed environments.

Wireless-Networking

Wireless LAN Service

Wireless LAN (WLAN) Service configures and starts the WLAN AutoConfig service, regardless of whether the computer has any wireless adapters. WLAN AutoConfig enumerates wireless adapters, and manages both wireless connections and the wireless profiles that contain the settings required to configure a wireless client to connect to a wireless network.

WoW64-Support

WoW64 Support

Includes all of WoW64 to support running 32-bit applications on Server Core installations. This feature is required for full Server installations. Uninstalling WoW64 Support will convert a full Server installation into a Server Core installation.

XPS-Viewer

XPS Viewer

The XPS Viewer is used to read, set permissions for, and digitally sign XPS documents.
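The first line of each entry above is the feature name that the Server Manager cmdlets accept; for example (feature names taken from the list above):

```powershell
# Check whether a feature is installed.
Get-WindowsFeature -Name Telnet-Client

# Install a feature along with its management tools.
Install-WindowsFeature -Name WINS -IncludeManagementTools

# Remove a feature that is no longer needed.
Uninstall-WindowsFeature -Name XPS-Viewer
```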

 

Free Microsoft Azure account for Students


Originally posted on: http://geekswithblogs.net/Jialiang/archive/2015/07/01/free-microsoft-azure-account-for-students.aspx

Good news for global students! If you are a student (a currently active student with a .edu email account), you can get a FREE Azure cloud computing account from Microsoft today! The free account includes:

  • Free Azure Web App Service
  • Free MySQL Database from ClearDB
  • Free App Insights
  • Free VS Online Service

With these, you can build and host a web site backed by MySQL (e.g. a WordPress site) at NO cost!


For more details visit http://scottge.net/2015/06/30/get-free-microsoft-azure-account-if-you-are-a-student/.


Running Your First Code Camp


Originally posted on: http://geekswithblogs.net/dlussier/archive/2015/07/01/165460.aspx

Every now and then I get people asking me about how to run a conference. One thing I encourage is that people start small and build from there. I ran the Winnipeg Code Camp for a number of years before evolving it into Prairie Dev Con, and the foundation of the code camp is the base that Prairie Dev Con grew out of.

So below are my thoughts on how to run a one-day, multi-track Code Camp.

What’s a Code Camp

Code Camps became popular in the 2000s. They were free one-day technology conferences that focused on showing off technology (so more code, less marketecture). This is the true volunteer event – low budget, high volunteerism, but still high quality and lots of fun. All costs are covered by sponsors, and there's never an entry fee for attendees.

Ok, so let’s start planning our Code Camp!

The Code Camp Format

For a first-time code camp, I would suggest doing a single-day event on a Saturday, running two or three tracks of sessions (based on your market size and speaker pool). The schedule will look like this:

8:30 – 9:30 Breakfast/Registration
9:30 – 9:45 Welcome
9:45 – 10:45 Session
11:00 – 12:00 Session
12:00 – 1:00 Lunch
1:00 – 2:00 Session
2:15 – 3:15 Session
3:30 – 4:30 Session
4:30 – 5:00 Wrap-Up

That gives you 10 sessions for a two-track setup or 15 sessions for a three-track setup.

Step 1 – Gauge Interest

A big part of a Code Camp’s success is the energy and commitment that an organizer brings to it, but you also need to know if your community shares your vision and will support the event. Reaching out to local technology user groups to see if their organizers and members share the same excitement is your starting point. It also makes it easier to promote the event if you can get leaders of those communities on board.

Now realize you’re just gauging interest here, not commitment. I ran a PrDC where community leaders – who were all very well meaning – said I’d get well over 300 attendees out; I struggled to get 180. The reality is that until you run your first event you won’t have an idea of how many people you’ll actually get out, so early on you’re just gauging interest and not looking for commitment.

You also need to gauge interest with the people who will be speaking at the Code Camp. We’ll talk about doing a call-for-speakers later, but especially for an initial event you want to have a good number of speakers already lined up and committed to the event.

Step 2 – Source Venues

If there’s enough interest, now is the time to look at venues. For a Code Camp you want to do this on the cheap. Don’t even bother with hotels or conference centers. Your first stop should be local schools, colleges, and universities. Here’s why:

  • They’ll already have lecture halls and classrooms set up with projectors (verify that projectors are included in room rental prices)
  • Room space is usually cheaper to rent than at hotels or conference centers
  • No classes typically happen on Saturdays, so space will generally be available
  • They *can* be located on more convenient public transportation routes (buses, subways, etc.), making them easier for people to get to

If you have any contacts with the administration, you may be able to pitch this as a good community event that students will benefit from and get a discount on the rooms.

When sourcing venues keep in mind that you need n+1 rooms, where n is the number of tracks you’re running. The +1 is for your plenary room – the place that all attendees will meet for meals, for the welcome/kickoff in the morning, and for the wrapup at the end of the day. All rooms should be close together so attendees aren’t required to go walking all over the place.

Make sure that you ask about parking – if its free, if there’s paid lots nearby and what the costs are, and what the street parking rules are. You’ll want to communicate this to your attendees.

Also ask about internet access – is there public wi-fi, is there a charge, is a passcode required, etc. This information will be important to provide to speakers as well if any require internet access for their planned talks.

Finally, while this shouldn’t be an issue with building codes and current laws, make sure that the venue you select is easily accessible for people with disabilities.

Step 2.1 – Food

For your first code camp, food is definitely optional, although if you decide not to do food you should try to ensure there are enough restaurant options and coffee shops near the venue for attendees.

If you do decide to do food, check what your venue’s policies are. Most venues will require that you use their own food services and won’t allow outside catering, which also means a higher price – venue-based food is almost always more expensive (always, in my experience). This is where gauging interest is important: since you aren’t charging a fee, there’s no way to know how many people will show up. If you charged a fee, you could still cover a no-show’s food, but in this case it’s all about estimation.

You could run your first code camp without food, gauge attendance, and use that to plan future events that incorporate food. Or, if you’re confident in your estimates, figure out a reasonable menu. Code camps are the *only* event where I think continental breakfasts are ok. Also look at sandwiches, pizza, or chicken fingers & salad for lunch – typically on the lower side of cost and generally liked by most people. Do take into account people’s dietary needs (allergies, cultural preferences, etc.). I avoid any food option based on pork or seafood and stick to chicken or beef from a meat point of view. You could have vegetarian and vegan folks as well. Just make sure when you review the available menu that there are options in case you need alternate meals.

Step 3 – Who’s Running This?

Now that you have your venue, a date, and food costs, it’s time to start approaching sponsors. But first, this is when you should tighten up your leadership organization if you haven’t already. These types of events are run by very well-meaning individuals who want to improve their communities, but they’re also run by people who aren’t perfect, and stuff can happen (I blogged about a community event gone bad here). So let’s talk about how you can organize this.

For the Winnipeg Code Camp, although I organized it, we used the Winnipeg .NET User Group as a neutral, already-established organization to run finances, sponsorship, and communication through. Finances because a community group account was already set up and the guy who handled finances was willing to be the pass-through for everything. It’s also good to have a neutral body be the host for sponsorship – some companies don’t want the perception that they’re partnering with competition to put on an event, but they’ll definitely sponsor a neutral organization’s event (so they’re sponsoring the .NET User Group’s Code Camp, not putting one on with competitors).

Here are options:

Leverage an existing organization to act as the “host”. A technology user group is ideal for this; however, note that whoever is seen as hosting also owns liability for the event. We’ll talk about that in a second.

Write up an organizers’ agreement stating who is responsible for what and who is accepting liability for the event. Yes, this sounds scary, but it’s a necessity. The reality is that putting on any type of event, free or not, holds some level of risk that needs to be mitigated. This also protects all organizers.

Create a corporation to run your events out of. This is probably extreme, but it’s what I ended up doing for my conferences. For code camp events you probably don’t want to go through this rigor, but it does dot ALL the i’s and cross ALL the t’s. It’s also costly and time consuming.

A note on liability – this is always a consideration when running an event. Even if you’re the nicest person in the world putting on a free community event for the betterment of the community, if someone eats bad food or trips and breaks their leg you could still be named in a lawsuit. Event insurance is very inexpensive: for Prairie Dev Con I pay $300 - $500, which covers me for food-borne illness and any type of injury that could occur at the conference. The insurance will need to be taken out by a person or entity though, and unless you’re running this under a structured legal entity, somebody will be the one to *own* the liability coverage (and any liability).

Step 3.1 – Sponsors

Now that you have venue costs and an estimate on food costs, you can approach sponsors – which is how code camps are typically funded. You should create a package you can present to sponsors:

What is the event and what is the vision for it, what are the goals.

Who is involved and who will the attendees be.

What is the ask of the sponsor.

What will they get in return (logo recognition on marketing, website, opportunity to do a presentation, etc.)

I would *not* put a limit on the number of sponsorships you have available. Have a number that you need to get to in mind to cover your costs, but if you get more sponsors that’s ok – you can spend the money on prizes or extra perks at the code camp. Just have the mindset that you want to spend ALL of the money on the event – there are rules/tax implications for volunteer groups who carry money forward, and I don’t know them all (I have an incorporation now, so I just run stuff through that).
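
To make that “number you need to get to” concrete, here’s a back-of-the-napkin sketch in Python. Every dollar figure below is a made-up assumption for illustration, not a real quote – plug in your own venue, catering, and sponsorship-package numbers:

```python
import math

# All figures are illustrative assumptions – replace with your own quotes.
VENUE_COST = 800.00        # assumed day rate for the rooms
FOOD_PER_PERSON = 15.00    # assumed continental breakfast + lunch, per head
MISC_COST = 300.00         # assumed insurance, signage, nametags
SPONSOR_PACKAGE = 500.00   # assumed price of one sponsorship package

def sponsors_needed(expected_attendees):
    """Minimum number of sponsorships that covers the estimated total cost."""
    total = VENUE_COST + MISC_COST + FOOD_PER_PERSON * expected_attendees
    # Round up – there's no such thing as a partial sponsorship.
    return math.ceil(total / SPONSOR_PACKAGE)

print(sponsors_needed(100))  # 800 + 300 + 1500 = 2600 -> 6 sponsorships
```

Since attendance is the one number you can’t know for a free event, run it for your low and high estimates and aim for the high one – any surplus goes back into prizes and perks.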

Step 4 – Website and Ticketing

You may want to get a website set up before you approach sponsors, just so they can see that there’s an online presence and the event is legit – or you may not, if you have personal connections to those you’re looking to get sponsorship from. Regardless, you should get some website up as early as possible once you have a good idea of whether the event will be a go or not.

You will also need some way for attendees to register. There are lots of ticketing/event-registration sites out there; my fav is Picatic. Your registration method needs to be more than just collecting a name/email and tracking registration numbers. There are a few key pieces of info you need to ask for.

Food Allergies/Preferences – Does the person have any food allergies? Are there any preferences you need to be aware of (personal preference, religious/cultural, etc.)?

Emergency Contact Info – In the event something happens, who should be contacted?

Consent to Media – Are you planning on taking pictures and posting them to social media? You may want to get attendees’ permission to appear in pictures online. At one conference I had an attendee ask that she not be included in any event pictures because of fears about an ex-boyfriend who was looking for her.

Most good ticketing sites will let you add custom questions to the process, which is the easiest way to collect this information. (Note – one practical way to handle the “can I take your picture or not” question is to provide a slightly different name badge (colour, ribbon, etc.) identifying those who don’t wish to be photographed.)

Step 5 – Speakers

You can approach speakers and sponsors at the same time, but I list this step here because you need sponsors lined up first to ensure you can cover venue costs and book the venue. Ideally you start gauging speaker interest early and continue looking throughout the organization process so you have some people/sessions ready to post on the website when it goes live.

For code camps, speakers tend to be locally sourced, and while outside speakers can definitely be invited, a code camp usually doesn’t have the dollars to pay for travel or hotels. I’ve known many speakers (and have done this myself as a speaker) who will pay their own way for a code camp, but I see code camps as a great opportunity to groom new and upcoming speakers and give them a stage to help improve their presentation skills. The point of a code camp is to learn from each other, not come out and see big-name speakers.

In fact, for my code camps I wouldn’t vet speaker submissions – I ran them on a first come/first served basis. I would, however, talk with a speaker if they submitted a duplicate of someone else’s talk, or if we had an imbalance of sessions (i.e. you don’t need 5 talks on Intro to ASP.NET), and work things out that way. I would also offer guidance and coaching for those new to speaking.

The call for speakers can be as simple as providing the info to various community leaders to spread through their membership groups, leveraging social media, posting to services like SpeakNet, and asking sponsors if they have anyone in their organizations who is interested (not from a sales/marketing point of view – it has to be about code/technology). Of course you should have a method for people to submit their session proposals that can be shared on your event website and social media. I use Survey Monkey for this – you can build a “survey” that captures speaker info and session details for free.

Step 6 – Promotion

Time to promote the event! Social media helps a lot here, but there are still some things that require in-person contact of some sort. Working personal connections to get the word out works wonders, especially if the event is seen as a low-cost, community-driven learning event open to everyone (like a code camp typically is).

Also look for non-traditional local industry groups to help get the word out – groups that deal with management/decision-maker level folks in IT (i.e. ICTAM) or Chambers of Commerce. Newspapers will sometimes allow events to be posted for free in their business sections under upcoming events.

Speaking of newspapers, definitely reach out to local print, radio, and TV media, either to get visibility before the event or coverage at the event.

Step 7 – Prep for The Event

The event is coming up and it’s time to start prepping for it! Whether it’s a low-budget code camp or a for-pay single-day event, here are some tips for getting ready!

One rule of thumb I’ve learnt is that you need to invest money into the areas of your event that provide the most value. Especially for a code camp, you shouldn’t worry too much about all the “nice-to-haves”; you don’t need t-shirts or fancy badge holders or big banners. If, after you cover the basics, you still have budget left over then by all means look at adding some special things to the event. But don’t put them at the top of your list.

Nametags – Amazon is your friend here! You can get plastic name-tag holders and lanyards for much less than retail at places like Staples. If you’re only doing a small run – up to 200 or so nametags – you can print them yourself with a home printer and nametag printer sheets (also available on Amazon).

Signage – Staples is actually a great place to get posters made. A 4x3-foot poster is about $35 here in Canada, which is pretty inexpensive.

Mobile App – I used Guidebook for Prairie Dev Con Regina 2015 and was very happy with it! It provides iOS and Android apps with the full schedule and the ability for attendees to build their own schedules, as well as other features like venue maps and custom lists. And it’s FREE for events with under 200 attendees!

Print Schedules – Even with the mobile app, many people still like to have a physical paper schedule in hand. Black and white is fine unless you have extra funds for colour. Make sure that whatever you print matches what’s on the website and mobile app.

Confirm Venue Access and Review Schedule – Confirm that you’ll have access to your venue space before your event is scheduled to start; you’ll need to be there at least a few hours early to do any setup (registration table, signage, etc.). Make sure venue security is aware of your event and what time you’ll be getting access to the rooms. Confirm numbers and times for food. If you need to drop anything off the night before, know where the materials will be locked up and who can get you access.

Communicate with Your Attendees – Use email, social media, your blog, etc. to remind attendees (and speakers) about the venue, times, locations within the venue, the schedule, and where to get more information. Don’t forget to include information about venue parking! One thing I strongly suggest is having an Attendee FAQ area on your event website where you can post all this information, making it easy to refer people to it (just send them the URL). People are busy, and a friendly reminder is definitely appreciated – they may have registered a while back and not had a chance to keep up with event announcements.

Social Media – Make sure you have all your social media accounts created and hashtags decided on.

Step 8 – Run the Event!

Time for the big day! Running a code camp is a lot of fun, but if it’s your first time it can seem daunting. No worries though, you’ll do great!

Remember the timing format we talked about earlier:

8:30 – 9:30 Breakfast/Registration
9:30 – 9:45 Welcome
9:45 – 10:45 Session
11:00 – 12:00 Session
12:00 – 1:00 Lunch
1:00 – 2:00 Session
2:15 – 3:15 Session
3:30 – 4:30 Session
4:30 – 5:00 Wrap-Up
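
If you want to tweak this format (say, a later start or more sessions), the rhythm is easy to generate – here’s a throwaway Python sketch of the same timing: hour-long sessions, 15-minute room-change breaks, and an hour lunch replacing the break after the second session (times come out in 24-hour format):

```python
from datetime import datetime, timedelta

SESSION_LEN = timedelta(minutes=60)  # hour-long sessions, per the format above
BREAK_LEN = timedelta(minutes=15)    # room-change break between sessions

def session_slots(first_start, count, lunch_after):
    """Generate (start, end) time strings for `count` sessions, with a
    one-hour lunch (replacing the break) after `lunch_after` sessions."""
    t = datetime.strptime(first_start, "%H:%M")
    slots = []
    for i in range(count):
        end = t + SESSION_LEN
        slots.append((t.strftime("%H:%M"), end.strftime("%H:%M")))
        if i + 1 == lunch_after:
            t = end + timedelta(hours=1)  # lunch runs right after this session
        else:
            t = end + BREAK_LEN
    return slots

print(session_slots("9:45", 5, 2))
# [('09:45', '10:45'), ('11:00', '12:00'), ('13:00', '14:00'),
#  ('14:15', '15:15'), ('15:30', '16:30')]
```

Whatever you generate, keep the 15-minute gaps – attendees need time to switch rooms, and speakers need time to get set up.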

Let’s break this down.

7:00 AM – 8:30 AM

Get to the venue early – at least an hour ahead, depending on how much pre-event prep you’ve completed. Make sure the venue has a table outside your main meeting room for registration. I usually lay out nametags alphabetically on a table and let attendees pick them up, with one or two people available to help and re-organize the tags. If there’s any swag or material (like schedule handouts), have it available at the registration desk as well. If there’s any signage you want to post giving attendees directions, get that up during this time.

8:30 AM – 9:30 AM

Direct attendees to where the food is and to the plenary room. In the room, have a laptop set up with a rolling PowerPoint showing information like the venue wi-fi (if available), a link to the session surveys (Survey Monkey is great for this as well), thanks to the sponsors with their logos displayed, and any other important information.

9:30 AM – 9:45 AM

This is where you welcome everyone to the event, introduce yourself and the organizers, thank the sponsors, and go over housekeeping: review the venue map (point out the rooms and things like where the bathrooms are), review the schedule, explain where attendees can submit session surveys, encourage them to use social media and tell them what accounts/hashtags to use, and explain how you’ll be drawing for prizes at the end of the day (if you are). Don’t forget to thank the speakers and the attendees – code camps need everyone to succeed, and the effort people put forward, even just giving up a Saturday to attend, should be acknowledged.

9:45 AM – 12:00 PM

The rest of the day will be all about the sessions. As an organizer you should be making the rounds, ensuring that everything is going well. Your biggest issue during this time will be technology: people not being able to connect to a projector, laptops crashing, projector bulbs burning out, etc. If you can, have a backup projector and a backup laptop on hand so that, worst case, files can be transferred over. At one code camp a presenter’s VGA port died between his successful practice run that morning and his session that afternoon. Weird stuff can happen.

Also monitor social media – watch for how people are enjoying the event and whether they post any concerns or issues.

12:00 PM – 1:00 PM

Lunch time! Use this time to make any announcements, updates, or reminders in the plenary room.

1:00 PM – 4:30 PM

Same as the morning sessions.

4:30 PM – 5:00 PM

Here you bring everyone together for a wrap-up. This is where you thank everyone again for coming, thank the speakers and sponsors, review the day’s events, and do any prize draws.

Don’t be surprised if the crowd at the end of the day is smaller than the one you started with. Not everyone can make it the whole day, and that’s ok.

Once you’re done, all your materials are packed up, and you’re ready to leave, it’s a great opportunity to go out somewhere for after-code-camp beers/food/whatever and continue the awesome community building!

Post-Event

Once the event is done, your role as organizer isn’t. There are still some after-event items to take care of.

Send out an email to your sponsors thanking them for their support and also providing information on how well the event went. Sponsors want to know that their sponsorship dollars were put to good use, so let them know!

Send out an email to attendees thanking them for coming out and encouraging them to continue the conversations started at the code camp – give links to various user groups in the community, show where they can get session materials, and let them know where to find the post-event survey (you should have a post-event survey, btw… Survey Monkey is great for this).

Do a post-event review with the other organizers. Talk about what you could do better next year, what you’d want to keep the same, and how you can make the event better.

Do you have money left over? You shouldn’t, but if you do, figure out what to do with it. Donating it to a local tech-related (or not) charity is a good option if you have no other ideas. Remember that the idea is to have no leftover dollars by the end of the event.

And That’s It!

We covered a LOT of information in this post, and if you felt a little hesitant before you may be feeling very hesitant now. PLEASE DON’T BE! Running the Winnipeg Code Camp was one of the most rewarding and fun experiences for me, and putting on a code camp can be a great experience for you too! If you have any questions or comments, or want to discuss in more detail how to get your code camp or one-day event off the ground, please either leave a comment below or hit me up on Twitter!

Thanks for reading!

D
