Planet FoxPro

July 23, 2014

Beth Massi - Sharing the goodness

Trip Report: Community Leadership Summit 2014


I had the opportunity to attend the Community Leadership Summit in Portland, Oregon this last weekend, held right before OSCON. This free event brings together all kinds of community leaders, not just from technical communities, but anyone who is interested in growing and empowering a strong community. I know I usually write up trip reports of technical conferences, but this particular conference was such a refreshing experience for me that I had to tell others about it here.

The lead organizer, Jono Bacon, is a true leader in community management. He was the Ubuntu Community Manager at Canonical for many years and has a ton of experience in open source software communities. He now works as Senior Director of Community at the XPRIZE Foundation. This event brings together leaders in community management to “discuss, debate and continue to refine the art of building an effective and capable community.”

CLS has an open, unconference style where everyone who attends is encouraged to lead discussions and contribute sessions on whatever topic they find relevant. These discussions happen in a circle of chairs where everyone can participate freely. There are also more structured 5- and 15-minute presentations given by leaders in the field on areas of their expertise. Of course, the best part of any conference is the networking opportunities.

Here are some of my favorite parts of CLS.

Unconference-style format. Planning each day’s sessions. Capturing learnings.

I’ve been to a couple unconferences before but I think this one was the most organized. The basic idea of an unconference is that the attendees themselves propose topics and sign up to lead the discussions. CLS was held in a couple (large) rooms at the Oregon Convention Center. Those rooms (plus the wide hallway) were broken up into 10 areas with chairs organized in a circle. Signs above each area designated the session number (Session 1, … Session 10).

Each session was an hour with 30 minute breaks in between and an hour lunch break. So the schedule board reflected that. In order to fill the schedule, the organizers made session cards available where you could write down the topic you wanted to lead. I’d say over a third of the attendees proposed sessions!

To kick it off, Jono spoke to everyone on the importance of community management and set the tone for the entire conference. Then it was up to us, the attendees, to create content. It took about an hour to fill the schedule for the day. Here’s how it worked.

  • Those interested in leading discussions wrote their topic on a session card.
  • Those people then lined up in front of a mic and quickly introduced themselves and their topic to the audience (about 1-2 minutes per person).
  • Jono then took just those folks over to the schedule board, where similar topics were combined and timing was worked out. That way, if one leader wanted to attend another discussion happening at the same time, they could fix that.
  • When finished, all the attendees were invited to the schedule board to figure out the sessions they wanted to go to.


Most of the sessions were ad-hoc discussions around the topic but some leaders had more structured activities like post-it note brainstorming sessions. 15 minutes before the end of the hour, the organizers rang a bell to signal wrap-up and then rang it again when the hour was up.

Another session format that you could sign up for was 15-minute plenaries and 5-minute “lightning talks”. The hour after lunch was reserved for these talks, which could include a presentation. (I find it amazing that only one projector was needed for the entire conference!)

In order to capture notes and learnings from each session, we simply added notes to http://www.communityleadershipforum.com – the forum Jono created before the conference. I encourage you to take a look at the notes and ask questions.

I think for this particular conference this format worked extremely well because it was all about discussing best practices and being excellent to each other. If you’re interested in running a CLS conference in your local area, Jono announced the CLSx local event format and was hoping to get at least 5 volunteers. Turns out there were more than 15! If you’re interested in organizing one, see http://www.communityleadershipforum.com/t/clsx-license-1-0-feedback-welcome/177 

Learning about community challenges in Open Source projects.

Content-wise I think the connection with folks running OSS communities was really helpful for me. It seems that most community leaders of OSS projects struggle with similar issues – number one being attracting and retaining contributors. (This actually extends to any community that relies on volunteers.) Rewards and recognition are an important part of that retention, as is a solid set of expectations for achievements. Recognizing any contribution, large or small, from code contributions to a simple bug report, is very important.

Visibility is also challenging because most people contributing to an OSS project are busy developers who don’t necessarily want to worry about (or have time for) “marketing” their project. This is why many projects are backed by companies or foundations, which can help a lot here.

There are also a slew of tactical things to do like setting up a smooth development infrastructure so that you can make it as easy as possible for people to contribute. Another interesting debate was around contributor license agreements and how there seems to be a movement away from them to lower the bar and make it as simple as possible for people to contribute. It’s an interesting, complicated, legal & social debate for sure.

Check out some of these session notes and ask questions on the forum.

http://www.communityleadershipforum.com/t/how-to-move-from-proprietary-to-open-source-development/301

http://www.communityleadershipforum.com/t/building-a-developer-ecosystem/257

http://www.communityleadershipforum.com/t/session-9c-how-to-grow-a-developer-community/292

http://www.communityleadershipforum.com/t/2b-managing-non-technical-communities-in-foss/282

Amazing people.

I met a ton of people at CLS. I tended to gravitate toward the technical folks, but I did have a couple very interesting chats with community leaders of inner city volunteer programs as well. It was definitely the people at this conference that made it so unique for me. There were a ton of small startups and OSS project leaders, but there were also some big companies like Oracle and Adobe there (and some good people working in communities there too). I got to meet the person behind @Java, Tori, which was pretty cool. She proposed a session called “Working at the Deathstar – Managing communities for large companies”. I can relate! :-)

I think I was the only person from Microsoft there. Now THAT was different. I introduced myself as “Beth Massi, Microsoft Developer Division” and I had a couple stares like “what the hell is Microsoft doing here?”. I explained that we’re doing a lot with OSS these days (i.e. .NET Foundation) and I was here to learn more about those communities. After that, everyone was extremely supportive and a few people even congratulated Microsoft for our participation in OSS and our long time sponsorship of OSCON. A gentleman from Mozilla even walked up to me as I was heading to a session and shook my hand and said “Thank you Microsoft for your work in Open Source. Please keep it up!”

That was awesome.

Of course, a conference isn’t complete without social activities! I chatted with some awesome people from Chef, Meteor, Mozilla, Neo4j, OpenStack and many others at the evening event on Friday. I even went dancing with Jono and company – he’s a good dancer but not as good as me ;-).


Thanks to everyone, particularly Jono Bacon, for a great summit! Here’s to new friendships! See you next time.

Enjoy!

by Beth Massi - Microsoft at July 23, 2014 10:57 PM

Alex Feldstein

July 22, 2014

Alex Feldstein

July 21, 2014

Alex Feldstein

July 20, 2014

Alex Feldstein

45 years ago today, Apollo 11 mission landed on the moon


Collins, Aldrin, and Armstrong

(photo credit: NASA)

Dating myself, I remember watching the live feed from the first steps on the moon.

by Alex Feldstein (noreply@blogger.com) at July 20, 2014 12:11 PM

Rick Strahl's FoxPro and Web Connection Web Log

Creating .DLL Script maps with the Web Connection .NET Managed Module

On IIS 7 and later, Web Connection’s preferred connector interface is the .NET connector, which internally uses an ASP.NET HttpHandler implementation to interface between IIS and the Web Connection server. Functionality-wise the .NET connector has the same feature set as the ISAPI one, plus some additional features, but the main reason it’s the preferred choice is that it’s much easier to set up with the default configuration of IIS and, surprisingly, it actually offers better performance and stability than the ISAPI implementation, especially in COM mode.

The managed module also works with IISExpress out of the box, so in a development environment you can easily use a non-system level tiny IIS Express implementation that’s easy to run and configure vs. having to deal with the full IIS environment configuration (which can be a bit daunting).

Lots of wc.dll Apps still around

I do quite a bit of IIS configuration support these days – a lot of people are finally upgrading ancient servers to newer versions of Windows, typically Windows 2012 these days. One issue that has kept some people from updating to the .NET managed module has been that older Web Connection applications in particular access the raw wc.dll directly as part of the URL. I’ve long considered this a bad practice, because accessing the ISAPI DLL directly is a potential configuration issue: IIS makes it hard to access DLLs directly and in some cases disallows it altogether. But nevertheless there are a lot of old applications out there that still use direct wc.dll links. Direct DLL links also make it much more difficult to deal with paths, as you always have to reference a physical location for the dll – script maps are much more flexible because they can be called from anywhere, so path dependencies just go away in many cases.

Using a *.dll Script Map with the .NET Managed Module

So what if you have a complex application and you can’t (or won’t) give up the wc.dll links in your application?

Today I was working with a customer through some configuration issues. We started discussing the application, and as I always do I recommended using the .NET module instead of ISAPI. We went over the improvements and why it’s the better choice, and we both agreed that switching would be great – but what about the .DLL extensions in the URL?

After some problems with the ISAPI configuration, I went ahead and set up the .NET handler configuration to verify that things were working, and sure enough, with the .NET module everything just worked. Since it worked with the module – in a flash of enlightenment – we decided to try creating a script map for .DLL and pointing it at the Web Connection .NET handler.

And lo and behold – the .dll script mapping worked perfectly!

It’s absolutely possible to create .DLL script map to a .NET Managed handler, which makes it possible to run existing Web Connection applications that use wc.dll directly on the .NET managed module.

Here’s what the relevant web.config settings look like (or you can use the IIS Admin interface to create this as well):

<configuration>
  <system.webServer>
    <!-- use this on IIS 7.5 and later -->
    <httpErrors existingResponse="PassThrough" />
    <!-- IIS 7 Script Map Configuration -->
    <handlers>
      <add name=".wc_wconnect-module" path="*.wc" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
      <add name=".dll_wconnect-module" path="*.dll" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
    </handlers>
  </system.webServer>
</configuration>

When mapped to a .NET handler, a .DLL script map behaves just like any other script map. Using the mapping you can run requests like this:

http://localhost/wconnect/somepage.dll

Ok that looks weird and is probably not your typical use case, but it actually works. More interestingly though you can do this:

http://localhost/wconnect/wc.dll?_maintain~ShowStatus

When this page comes up you’ll see that it loaded the .NET Managed handler, even though we referenced wc.dll! For this to work – you’ll want to make sure you remove the physical wc.dll from disk and have the webconnectionmodule.dll in the bin folder of your site.

This means that if you have an existing application that used any wc.dll links directly you can now use them with the .NET handler. Additionally you can start to clean up your application to use scriptmaps instead of the DLL link in the first place.

Why you should use ScriptMaps instead of wc.dll Links

As already mentioned, ISAPI configuration is getting more and more tricky as IIS versions progress. IIS today mainly relies on .NET to handle extensibility to other services and interfaces, and ISAPI is more of a deprecated protocol. It’s still there, but it’s an optionally installed feature. Further, if you are accessing a DLL directly using ISAPI (not through a scriptmap) you are directly accessing a binary file, which is generally discouraged. In order for this to work you have to explicitly enable generic ISAPI extensions in the IIS configuration.

Scriptmaps are simply a nicer way to write URLs. Instead of the ugly ~ syntax of:

wc.dll?ProcessClass~Method~&Action=Edit

you can use scriptmap to method mapping:

Method.sm?Action=Edit

where sm is a scriptmap, and method is the method of the process class that sm is mapped to. The URLs are much cleaner and easier for users to parse and understand.
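If you want to register a scriptmap extension like .sm against the .NET managed module, the handler registration follows the same pattern as the *.wc and *.dll entries in the web.config shown earlier. A minimal sketch (the .sm extension and the handler name are just examples; use whatever extension your application maps to its process class):

    <handlers>
      <add name=".sm_wconnect-module" path="*.sm" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
    </handlers>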

Finally script maps allow you to simplify relative paths. Because a script map is not referencing a physical file in a specific folder like wc.dll, you can reference a scriptmap from any location and have it behave the same way. This means if you create a page in a subdirectory and you want to access the scriptmapped page you can use the same path and it will work. IOW:

Method.sm?Action=Edit
Admin\Method.sm?Action=Edit
Account\Method.sm?Action=Edit

all are treated exactly the same. The only difference is that the script path passed as part of the server variables will point to a different location, so Request.GetPhysicalPath() will point at the site relative physical path on disk. Otherwise each of those three commands is identical.

Scriptmaps are the way to go!

Scriptmaps make life easier, and once again I urge you, if you’re not using them, to think about integrating them into your Web Connection applications. I suspect sometime in the not too distant future IIS will stop supporting direct access .DLL links and will force all operation to occur against script maps. Plus the easier usage for referencing dynamic links makes code much more portable across sites or virtual directories if your development and live environments aren’t 100% identical.

by Rick Strahl at July 20, 2014 07:49 AM

Alex Feldstein

July 19, 2014

Alex Feldstein

July 18, 2014

Alex Feldstein

July 17, 2014

Craig Bailey

Microsoft. Saves You Time

I’d love that as a new tagline.

I’ve really enjoyed reading some of the analysis of Satya’s memo. In addition to the ones I mentioned in my earlier post, I’ve found these to be particularly interesting: this, this and this.

There seem to be three main areas people are focussing on:

  1. It’s wordy and bloated
  2. There’s going to be job cuts
  3. Oh, and we’re not selling Xbox

I’m only interested in the first point, since it was the main reaction I had as well. There’s so much talk of experiences and productivity and empowerment that it quickly becomes meaningless. The most obnoxious phrases get their own callouts in the memo as well:

“Developers and partners will thrive by creatively extending Microsoft experiences for every individual and business on the planet.”

and the ‘core’ message seems to be:

At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

Which is fine I guess. Unless you try to work out what it means.

When I try to make it meaningful, the main summary I’m getting is that Microsoft wants to save us time. After all Satya is right to highlight that attention is our scarcest resource. And thus, when I think of Microsoft, I’d love to think of it as a company that saves me time.

As a consumer, as a home user, as a business user, as an administrator, as a developer, as a gamer.

Forget creatively extending digital productivity via reinvention and empowerment.

Instead, just Save Me Time.

The post Microsoft. Saves You Time appeared first on Craig Bailey.

by Craig Bailey at July 17, 2014 06:21 AM

Google Fucktardery

The always awesome Aaron Wall lets loose with another blistering rebuke of Google’s regular business practices.

When Google complains about censorship, they are not really complaining about what may be, but what already is. Their only problem is the idea that someone other than themselves should have any input in the process.

Definitely worth a read.

It’s a tricky area for me, since in large part I make my living dealing with both the good and bad of Google. The pendulum swings both ways of course, but in recent years they’ve swung firmly into evil territory.

The post Google Fucktardery appeared first on Craig Bailey.

by Craig Bailey at July 17, 2014 05:49 AM

Alex Feldstein

July 16, 2014

Alex Feldstein

July 15, 2014

Rick Strahl's Web Log

West Wind WebSurge - an easy way to Load Test Web Applications

A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up.

It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge.

I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and repeated on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool.

If you’re in a hurry and you want to check it out, you can find more information and download links here:

For a more detailed discussion of the why’s and how’s and some background continue reading.

How did I get here?

When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios.

When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg…

I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario, as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.

When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use.

As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First, it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that’s rarely an option.

Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes, it’s useful to have Web app instrumentation, but often that’s not what you’re interested in.

I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many Microsoft tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware.

West Wind Web Surge – Making Load Testing Quick and Easy

So I ended up creating West Wind WebSurge out of rebellious frustration…

The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests.

I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs.

A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail:

Note that the UI has changed slightly since then, with some improvements. Most notably the test results screen has recently been updated to a different layout that provides more information about each URL in a session at a glance.

The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works.

Session Capturing

As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like:

Sessions

You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture the output of Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool.

With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately you can use Fiddler itself and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem.

Note that not all applications route requests through Fiddler’s proxy unless they are configured to use a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls.

The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters let you specify strings that, if contained in a URL, cause those requests to be ignored. Again, this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

SSL Captures require Fiddler

Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

Session Storage

A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is a slightly customized HTTP header trace with a separator line between requests. The headers are standard HTTP headers, except that the full URL, instead of just the domain relative path, is stored as part of the first HTTP header line for easier parsing.

Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from.

The only ‘custom’ part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. Because the format matches what Fiddler produces for exports, it’s easy to exchange or view data in either Fiddler or WebSurge.
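To give a rough idea of what a stored session looks like, here is a hypothetical sketch based on the description above (the URLs are placeholders and the exact separator text WebSurge writes may differ):

    GET http://localhost/mysite/default.aspx HTTP/1.1
    Accept: text/html
    User-Agent: WebSurge

    ------------------------------------------------------------------

    POST http://localhost/mysite/SaveCustomer.aspx HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 32

    FirstName=Rick&Company=West+Wind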

Urls can also be edited interactively so you can modify the headers easily as well:

RequestEditing

Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here.

Use it for single URL Testing

Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually a nice way to run through REST requests in presentations, and I find the simple UI flow easier than using Fiddler natively.

Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks.

Overriding Cookies and Domains

Speaking of HTTP headers – you can also override the cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie:

CookieOverride

which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests

Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs when tests start, when all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test.

While sessions run some progress information is displayed:

Progress

By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests.

Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1,000 requests a second there’s not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

Test Results

When the test is done you get a simple Results display:

WebSurgeResult

On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk.

The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly.

Each item in the list can be clicked to see the full request and response data:

WebSurgeRequestDetail

This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to test the URL as configured, including any HTTP content data to send.

You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests.

Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:

Chart 

Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete.

Command Line Interface

WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far.

There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file.

Console

By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results.

The command line interface can be useful for build integration, which allows checking for failures or verifying that a specific requests per second count is hit, etc.

It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
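For comparison, a quick and dirty single-URL test with Apache Bench looks something like the sketch below; the URL is a placeholder, and ab’s -n and -c switches set the total number of requests and the concurrency:

    ab -n 1000 -c 10 http://localhost/mysite/default.aspx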

Current Status

Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison…

The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-).

The code will likely be released with version 1.0.

I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to sales@west-wind.com.

Resources

For more info on WebSurge and to download it to try it out, use the following links.

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in ASP.NET  

by Rick Strahl at July 15, 2014 09:06 AM

The Problem Solver

X things every JavaScript developer should know: Comparisons

Another item in the list of things every JavaScript developer should know is how comparisons work. Just like with some of the other JavaScript (or I should really say ECMAScript) features, anything you know about C# or Java could actually be misleading here.

 

To == or to ===

One of the weird things is that there are actually two comparison operators in JavaScript: the double and the triple equals. The == is called the equals operator, see section 11.9.1 of the ECMAScript standard, and was the original equality operator. Unfortunately the way this operator works is a common cause of confusion, and as a result the === or Strict Equals operator was introduced, see section 11.9.4 of the ECMAScript standard. It would have been nice if they had just fixed the original problem, but if they had they would have broken existing JavaScript applications.

In general I would always advise you to use the Strict Equals operator === whenever you do a comparison, unless you have a specific need for the behavior of the original operator.

 

What is the problem with ==

I mentioned that == has problems and should be avoided, but it’s still helpful to understand these problems. They basically boil down to the fact that the == operator does type conversions if the two types being compared are not the same. For example the following all evaluate to true:

    0 == "0" // true
    1 == "1" // true
    2 == "2" // true

Sounds reasonable enough right?

 

Unfortunately it isn’t quite that simple; all of the following evaluate to false:

    false == "false" // false
    true == "true" // false

These might seem weird, especially since the following evaluates to true again:

    true == !!"true" // true
 
So what is going on here?
 

The Abstract Equality Comparison Algorithm

Section 11.9.3 of the ECMAScript standard describes what is happening here. If one operand is a number and the other a string, as was the case in the first examples, the string is converted to a number and the comparison is done based on those. So basically these comparisons were:

    0 == 0 // true
    1 == 1 // true
    2 == 2 // true

 

So what was the case in the other two comparisons?

In these cases almost the same thing happens: the Boolean values are converted to numbers. That leaves a number to string comparison, where the string is also converted to a number. The result of converting true and false to a number is 1 and 0, but the result of converting the strings "true" and "false" to numbers is an invalid number, or NaN. And since NaN is not equal to any other number, those comparisons result in false.
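You can see all of the pieces of that conversion chain in a quick sketch you can paste into any browser console:

    Number(true)    // 1
    Number(false)   // 0
    Number("true")  // NaN
    Number("false") // NaN
    NaN == NaN      // false, NaN is not equal to anything, including itself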

So why did the last comparison, true == !!"true", evaluate to true? Simple: the double bang operator !! is evaluated first, and a non-empty string is truthy. The end result is the expression true == true, and that is obviously true. Sounds reasonable, but it also means that any non-empty string will produce true, so even true == !!"false" evaluates to true :-(

 

Conclusion

The double equality operator is a confusing part of JavaScript’s history. You are best off avoiding it and using the Strict Equals operator === instead.
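With === none of the type conversions above take place; if the operand types differ the result is simply false. A short sketch of the same values with the strict operator:

    0 === "0"        // false, different types and no conversion
    1 === 1          // true
    true === "true"  // false
    true === !!"abc" // true, !! produces a real Boolean first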

 

Enjoy!

by Maurice at July 15, 2014 08:52 AM

Alex Feldstein

July 14, 2014

Craig Bailey

VisualFoxProWiki

ReplaceCommand

The REPLACE command lets you fill in a set of fields with specified values. It can operate on a specified record, the current record, a range of records (Scope Clauses) or a set of records (FOR Contactdate = DATE()), and even on multiple related tables simultaneously.
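For example, a quick sketch of typical usage (the table and field names here are made up) that updates every record matching a FOR clause:

    USE customers
    REPLACE lastcontact WITH DATE(), ;
            notes WITH "Called for follow-up" ;
        ALL FOR UPPER(city) = "PORTLAND"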

REPLACE apparently has a limitation of REPLACEing only up to 128 fields in a single command. I didn't find this documented in the VFP 7.0 or 8.0 help files or a few other places I looked. -- Andy Needham
If you attempt to REPLACE more than 128 fields in a single command, it will generate a Syntax Error -- Mike Yearwood

July 14, 2014 10:00 PM

UpcomingEvents

Editor comments: Added July 2014 LA Fox meeting
A place to list upcoming Visual FoxPro events like conferences, meetings, user groups, open training sessions...
Closest at the top please, and please remove past events.

July 14, 2014 06:49 PM

Alex Feldstein

Photo of the Day


Juvenile tri-color heron brothers
Wakodahatchee wetlands

by Alex Feldstein (noreply@blogger.com) at July 14, 2014 12:19 PM

Articles

SUPINFO International University in Mauritius

For a while now I've been considering picking up my activities as a student again, and I'd like to get a degree in Computer Science.

Personal motivation

I mean, after all these years as a professional software (and database) developer I have the personal urge to complete this part of my education. Having various certifications from Microsoft and having been awarded as a Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive that feedback: start from scratch!

No, seriously? Spending x number of years sitting through courses that might be outdated and already form part of your daily routine? Probably being in an awkward situation in which your professional expertise might exceed the lecturer's knowledge? I don't know... but if that's the path to walk... well, then I might have to go for it.

SUPINFO International University

Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University and Mauritius will be the 36th location world-wide for SUPINFO.

Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany. Not only does a student enroll into the programme, but they will also be placed into various internships as part of the curriculum. It's a big advantage in my opinion, as the person stays in touch with the daily procedures and workflows in the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education.

Open Day at the Medine Education Village

Last Saturday, Medine organised their Open Day and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too - but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the campus of Medine, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers of SUPINFO students. More about that in an upcoming blog entry.

Open Day at Medine Education Village - SUPINFO International University in Mauritius

Mr Alick Mouriesse, President of SUPINFO, arrived the previous day and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus and their high emphasis on partnerships with local IT companies in order to assist their students in getting future jobs while also feeling the heartbeat of technology live - something which is completely missing in classic institutions of tertiary education in Computer Science. And since I was on tour with my children, as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.

Apart from the close connection to IT companies and providing internships to students, SUPINFO clearly works on an international level. Meaning students of SUPINFO can move around the globe and can continue their studies seamlessly. For example, you might enroll for your first year in France, then continue to do 2nd and 3rd year in Canada or any other country with a SUPINFO campus to earn your bachelor degree, and then live and study in Mauritius for the next 2 years to achieve a Master degree.

Having a chat with Dale Smith, Expand Technologies, after his very interesting session on Technological Entrepreneurship - TechPreneur

More questions by other craftsmen of the Mauritius Software Craftsmanship Community

And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study and work abroad. And thanks to this, Medine already announced that there will be new facilities near Cascavelle to provide dormitories and other facilities to international students coming to our island. Awesome!

Okay, but why SUPINFO?

Well, coming back to my original statement - I'd like to get a degree in Computer Science - SUPINFO has a process called Validation of Acquired Experience (VAE) which is tailor-made for employees in the field of IT, and allows you to enroll in their course programme. I already got in touch with their online support chat but was only redirected to some FAQs on their website, unfortunately.

So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation prior to assigning a course level, and hopefully my chances of getting some modules credited ahead of my studies are looking better than at the other institutes. Don't get me wrong, I don't want to go down the easy route, but why should someone sit through "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practicing Clean Code Developer principles are second nature?

Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at the university. Yes, I studied Applied Chemistry for a couple of years before switching to IT and software development in particular... ;-)

by Jochen Kirstaetter (jochen@kirstaetter.name) at July 14, 2014 06:23 AM

Andrew Coates ::: MSFT

TechEd Australia Call for Topics Coming Soon (but don't wait)

TechED_WebBanner

By now, I'm sure you know that TechEd Australia is a different format this year. We're running essentially the same show twice in October – in Melbourne on October 7th & 8th, and in Sydney on October 27th & 28th. Being only 2 days, we've concentrated on delivering content focused on a single theme. The theme for this conference is Transforming with the Cloud.

One of the ways we're focusing the content is that we're only running 4 main tracks this time around, and each of them will consist of around 12 deeply technical sessions. There are a couple of kinks to iron out in the "call for content" tool but, as content owners, we (see below for who we are and what the tracks are) want to give you as much time to submit great sessions as possible. We'd love to hear from you if you've got a great session or sessions to deliver that align to one or more of the tracks at the end of this post (note that the track descriptions are works in progress still, but again, we wanted to get this out to you ASAP).

Next Steps

If you've got a great idea for a session or sessions that fit into the tracks below, please send email to the track owner or track owners:

  • One email per session proposal
  • If you can't decide which track(s) a session should be in, put the owners of both tracks on the To line of the email

The Tracks

Datacenter and Infrastructure Management

Track Owner: Andrew McMurray andrew.mcmurray@microsoft.com

Cloud computing models are changing the technology landscape and providing more opportunity than ever for IT to deliver impact to the business. The Datacenter and Infrastructure Management track dives into evolving enterprise datacenter concepts built on cloud technologies with Windows Server 2012 R2, Microsoft System Center 2012 R2, Microsoft Azure Pack, and Microsoft Azure. This track empowers IT professionals, technical decision makers, and business decision makers to understand how our technologies come together as a solution to help them provision their cloud and datacenter infrastructures, provide business continuity, deliver services, and manage applications. Sessions include breakthrough capabilities in the areas of storage, networking, virtualization, management, and automation. Attendees also learn how the latest innovations bring together on-premises approaches and cloud-based technologies to deliver hybrid solutions that span multiple products and leverage existing resources.

Developer Platform and Tools

Track Owners: Rocky Heckman rocky.heckman@microsoft.com and Esther Mosad esther.mosad@microsoft.com

The Developer Platform and Tools track brings the developer focused sessions to TechEd. Learn how to build devices- and services-based applications using technologies that span the breadth of the Microsoft platform including Windows 8.1, ASP.NET, Microsoft Azure, and Windows Phone, as well as cross-platform development for mobile devices. Learn how to take advantage of the application lifecycle management and latest productivity enhancements in Visual Studio and Team Foundation Server. Whether you are building consumer facing applications, line-of-business applications, or mission critical applications, this track provides you with what you need to move your projects and your career forward.

Office Servers and Services

Track Owner: Mario Tevanian mario.tevanian@microsoft.com

The Office Servers and Services track combines all the familiar Microsoft Office related servers, services and clients into one. Products covered include: Office 365, Office client apps, Microsoft Exchange, Microsoft SharePoint, Microsoft Project, Microsoft Visio, Yammer, and Microsoft Lync. Gain a deeper understanding of new features, deployment, management, and administration across all of these productivity tools—whether in the cloud, on-premises, or in hybrid scenarios.

Windows, Phone and Devices

Track Owner: Andrew Coates andrew.coates@microsoft.com

The Windows, Phone and Devices track provides the knowledge Professional Developers need to understand the capabilities that Windows and Windows Phone devices have to offer, as well as guidance for developing line-of-business apps for Windows and Windows Phone, and for publishing apps to the Windows and Windows Phone Stores. For this new devices and services world, we include deep technical content for enabling on-premises and cloud-based solutions, while also providing the key information to understand Windows operating system architectures, technologies, and usage scenarios.

by Andrew Coates [MSFT] at July 14, 2014 05:24 AM

July 13, 2014

Alex Feldstein

July 12, 2014

Alex Feldstein

July 11, 2014

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at July 11, 2014 05:00 AM

July 10, 2014

Craig Bailey

What does ‘mobile first, cloud first’ even mean?

Satya Nadella’s staff memo is good reading. Joshua Topolsky’s brief interview on  The Verge is also good. And opinion pieces like this also raise some good questions.

If (like me) you’ve been a little disillusioned with Microsoft over the past few years, then it’s heartening to read, since it gives confidence into how Microsoft is improving focus and internal processes.

But one thing that’s clear is that there’s no escaping the ‘mobile-first, cloud-first’ mantra. The line is reasonable of course, but the issue with hammering the phrase ‘mobile-first, cloud-first world’ is that it becomes almost meaningless.

I’ve been chatting with a number of people lately (including Microsoft friends) and this phrase gets trotted out all the time – I’m pretty sure most people have no real understanding (or at least no consistent understanding) of what it means.

After all, what does it mean? It could mean anything. It’s like saying we live in an ‘app first’ or ‘touch first’ or ‘lifestyle first’ world – terms that aren’t necessarily wrong, but just meaningless after a while. And saying that two things are first is also an odd choice. I obviously don’t get it…

Next time you speak to someone who lobs that phrase into a conversation, ask them what it really means – you’ll likely get a woffle*-infused barrage of cliches (it’s the ‘evolution of devices and services that fuses productivity with digital experiences’ etc) and nothing of substance.

 

(Notes: *waffle is actually the correct spelling, but it just makes me hungry)

The post What does ‘mobile first, cloud first’ even mean? appeared first on Craig Bailey.

by Craig Bailey at July 10, 2014 10:48 PM

Alex Feldstein

July 09, 2014

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at July 09, 2014 05:00 AM

July 08, 2014

FoxProWiki

ComboBox

Practical advice on using the Combo Box and List Box controls

July 08, 2014 10:43 PM

CULLY Technologies, LLC

Linux Mint 17 is much faster, except …

I finally sucked it up and upgraded my development laptop from Ubuntu 13.10 with the Unity interface to LinuxMint 17 with the Cinnamon interface. I am very happy. With Unity, I was always wondering if my hard drive was failing. Opening programs was soooo slow. I couldn’t stand it. Now with Mint, things are very speedy. All of this speed might be because it is a new installation, but I’m pleased none-the-less.

The actual installation of Mint took about 20 minutes. I AM SERIOUS. 20 minutes! Copying my home directory back onto the laptop took a couple of hours. If I had a few hundred dollars, I would have loved to upgrade my HD to an SSD drive. I think it would have done wonders. The 8-core CPU certainly isn't being taxed much. Even my 8G of RAM isn't too shabby. I'm thinking this 1T HD is the bottleneck. Yes, it has an 8G SSD drive that is used for the swap and the master boot record, but I think I'm somehow under-utilizing it. If it was a tad larger, I'd install the OS to it and move my /home directory onto the 1T drive.

Here is my “Except…”
I have written a small utility in my favorite language of Xojo. The utility does some simple things in Xojo and times the number of operations per second; thus it is a benchmarking application. Get it here for Windows, Mac and Linux: ctBenchMark

I was a little disappointed to be honest. I thought that Mint would be faster. It certainly is when running other Linux applications. I figure the difference is that Xojo applications are only 32bit for now. 64bit is coming sometime in 2015. (My estimated release date, not necessarily Xojo’s.)

Here are my averages:

Platform                 Win Move Test    DB Test     Math Test
Ubuntu 13.10 Unity       3,486 / sec.     9 / sec.    536,525
LinuxMint 17 Cinnamon    3,096 / sec.     8 / sec.    263,761

So, how is the performance so much better for 64bit applications, but so similar (apart from the drop in the Math test score) for the 32bit Xojo applications? I figure that Mint hasn’t done any optimization for 32bit applications and just uses the same Ubuntu and Debian layer of classes that Ubuntu (naturally) uses.

As I said, I was a little disappointed. It took the wind out of my sails a bit. Anyway, the VMs load a lot quicker. I’ll just have to hold on for the 64bit Xojo applications to come sometime next year. It can’t come soon enough.

I’m still a Mint fan, and I’m still a Xojo fan. We can always use more speed no matter where it comes from … or even when.

by kcully at July 08, 2014 06:22 PM

Alex Feldstein

July 07, 2014

The Problem Solver

Converting the RavenDB Northwind database to a more denormalized form

In a previous blog post I demonstrated how to denormalize the RavenDB sample database and use the DenormalizedReference<T> and INamedDocument types from the RavenDB documentation to make life really sweet. That leaves us with one small problem: the original sample database doesn’t work with our improved document design. With the sample database, small as it is, loading all documents as a dynamic type, converting them and saving them would be easy enough, but in a real database that would not be practical. So let’s look at a better solution: fixing the database itself.

 

Updating the database on the server

Instead of downloading each document, updating the structure and saving it back to the server, it is much better to do these sorts of actions on the server itself. Fortunately RavenDB has the capability to execute database commands on the server. These update commands can be PatchRequest objects that will let you do a large number of things using a nice C# API. And as the ultimate fallback there is the ScriptedPatchRequest, which will let you execute a block of JavaScript code on the server. Why JavaScript? Well, RavenDB stores things in JSON and the server is really not dependent on a .NET client.

Using the ScriptedPatchRequest we can execute a patch either on a single document or on a collection of documents. In this case I want to update all Order documents to reflect their new structure. It turns out this is quite simple:

 

using (IDocumentStore documentStore = new DocumentStore
{
    ConnectionStringName = "Northwind"
}.Initialize())
{
    var javaScript = @"...";

    documentStore.DatabaseCommands.UpdateByIndex(
        "Raven/DocumentsByEntityName",
        new IndexQuery
        {
            Query = "Tag:Orders"
        },
        new ScriptedPatchRequest
        {
            Script = javaScript
        });
}

This code will execute the JavaScript code to patch the document once for each document in the Orders collection.

 

The JavaScript code to execute is quite simple: just make the changes required to the document and you are set.

var company = LoadDocument(this.Company);
this.Company = {Id: this.Company, Name: company.Name};

var employee = LoadDocument(this.Employee);
this.Employee = {Id: this.Employee, Name: employee.FirstName + ' ' + employee.LastName};

var shipVia = LoadDocument(this.ShipVia);
this.ShipVia = {Id: this.ShipVia, Name: shipVia.Name};

this.Lines.forEach(function(line){
    var product = LoadDocument(line.Product);
    line.Product = {Id: line.Product, Name: product.Name};
    delete line.ProductName;
});

 

In this case I am converting the Company, Employee, ShipVia and Product properties to have the new structure. Additionally I am removing the ProductName from the OrderLine as that is no longer needed.

 

Sweet :-)

by Maurice at July 07, 2014 12:27 PM

Alex Feldstein

July 06, 2014

Alex Feldstein

July 05, 2014

Alex Feldstein

Photo of the Day


Baby purple gallinule
As you can see he is all feet
Wakodahatchee wetlands

by Alex Feldstein (noreply@blogger.com) at July 05, 2014 05:00 AM

July 04, 2014

Craig Bailey

Confidence and Smart People

Read this post by apenwarr on The Curse of Smart People.

This resonates with what I’ve seen:

Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.

It’s timely for me, because one of the main things I’m looking for in my life at the moment is interaction with RSPs (really smart people).

I think most techies suffer from a form of Impostor Syndrome, some more so than others. However, whilst in my fantasy moments I like to think that I lack confidence in my abilities, one of my greatest fears is that I’m actually suffering from the first part of the Dunning-Kruger effect …

(via Gruber)

UPDATE: I’d hate to be someone who does this kind of shit.

The post Confidence and Smart People appeared first on Craig Bailey.

by Craig Bailey at July 04, 2014 10:07 PM

Alex Feldstein

July 03, 2014

Beth Massi - Sharing the goodness

Office 365 API Tools for Visual Studio - Users and Files

The Office 365 APIs allow you to easily integrate Office 365 services into your apps in a consistent way. You can access user data like calendars, documents and more using REST APIs and standard OAuth flows from any platform. The Office 365 API Tools for Visual Studio make it super easy for developers to access the services via .NET or JavaScript client libraries. These tools are currently in preview.
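
To give a flavour of the REST side, here is a minimal C# sketch that fetches the signed-in user's calendar events over plain HTTP. It is not taken from the tools themselves: the endpoint URL and the assumption that you have already obtained an OAuth access token are mine.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Office365RestSketch
{
    // accessToken is assumed to come from a standard OAuth flow.
    public static async Task<string> GetCalendarEventsAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            // Send the token as a standard OAuth bearer header.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Assumed Office 365 (Outlook) REST endpoint for the current user's events.
            var response = await client.GetAsync("https://outlook.office365.com/api/v1.0/me/events");
            response.EnsureSuccessStatusCode();

            return await response.Content.ReadAsStringAsync(); // JSON payload
        }
    }
}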

I've been meeting up with different team members building these tools and have been watching them progress through this preview period. In this interview I sit down with Chakkardeep Chandran (Chaks), a Program Manager on this project, and we talk about working with Users and Files.

Watch: Office 365 API Tools for Visual Studio - Users and Files

Download the Office 365 API Tools for Visual Studio Preview here

And for more information see: .NET and JavaScript Libraries for Office 365 APIs

Have questions? Ask on StackOverflow. (Tag your questions with [Office365] & [API]).

Enjoy!

by Beth Massi - Microsoft at July 03, 2014 04:12 PM

Alex Feldstein

July 02, 2014

FoxProWiki

UpcomingEvents

Editor comments: Philly-no July meeting

July 02, 2014 02:27 PM

VFP Philly

No meeting in July

We're taking off the month of July. Check back soon to see the line-up of Southwest Fox previews for the coming months.

by Tamar E. Granor (noreply@blogger.com) at July 02, 2014 01:26 PM

Alex Feldstein

July 01, 2014

CULLY Technologies, LLC

Repaving your Linux machine: A checklist

Every so often, as a Linux desktop user, it comes time to repave my desktop machine. This means to wipe the hard drive and then install a brand new version of Linux on the hardware. In this case, I am moving from Ubuntu 13.10 and switching to Linux Mint 17 Cinnamon. I know this probably horrifies Windows (and even Mac) users, but GNU/Linux plus all of the free open-sourced software makes this, if not ‘easy’, do-able.

Linux people smarter than I am recommend partitioning the hard drive in half and then alternating Linux installs between the two partitions. I have been doing more video editing, so my 1TB hard drive seems to be getting smaller all of the time. I don’t think this is an option for me anymore.

I’ve repaved machines in the past. Sometimes it has gone swimmingly. Other times, I’ve forgotten a couple of tasks and I’ve had to suffer my lumps. Often I forget to export the Contacts list from Thunderbird and I have to re-gather the contacts. Some aspects of repaving have gotten much better. Years ago, email switched from POP to IMAP as the preferred method of mail storage. That makes migrating to new machines easier. LastPass (or any other password manager) is another leap forward in making repaving easier. Once the browser plugin is installed and logged into, the passwords for the myriad of websites are just there. Nice.

Repaving is still non-trivial. I want to create a repaving checklist to help me *not* forget a step, and to increase my chance of either a successful migration to another machine or a successfully repaved machine. Repaving a machine in place is much more dangerous than migrating.

Double Backups
Make sure that you have double backups. Have remote backups of all files, have local backups of all files including Virtual Machines in their entirety. Local backups make restoring a much quicker process. I use CrashPlan for remote backups *and* local backups. Make sure that you test the restoration of the files!!! Without this, it is a false sense of security. I’ve had restorations fail when using other backup systems. Luckily never without a way to recover the files at some point.

Repaving Checklist

  1. Export of Thunderbird Contacts. (ldif file)
  2. Screen capture of Thunderbird account settings
  3. Backup of PostgreSQL data (check restoration!)
  4. Backup of PostgreSQL.conf (/etc/postgresql/9.1/main/postgresql.conf)
  5. FileZilla websites (because their bookmarks suck!)
  6. Full VM backups (this will take some time!)
  7. Password Manager export to CSV
  8. Remote Administration settings (usernames, IP addresses, etc.)
  9. Printer list
  10. Test LiveCD including sound

This list is much smaller than I would have thought. Linux installs so much for me that it makes keeping current easier than it would be when repaving … say … a Windows machine. It’s nice not having to enter all of those license keys. I’m productive much quicker.

I’ll make notes here as I edit this process. After all, when I’m done with my machine, I need to repave my wife’s laptop as well. Wish me luck!

Post Install Notes
I’ve completed the upgrade! And LinuxMint is *much* faster than Ubuntu with Unity.

Everything went well. I struggled with the security settings and Postgres but I haven’t installed Postgres on my system in years. Here are my notes for the install:

  • Copy the entire /etc/postgresql/9.3/main directory. I needed to recreate some of the settings from the other conf files. Having those as reference would have been handy.
  • Thunderbird Message Filters. I wish there were an easier way to export and import those. I should have backed up the msgFilterRules.dat file. I’m recreating my rules now, which I use heavily. Basically, if an email shows up at the Inbox level, then it is probably spam.
  • I really wish I had $250 for a 500G SSD drive, or $450 for a 1T SSD drive. I still suspect that the performance issues with this not-so-old laptop are related to the speed of the HD. Perhaps next repaving. One day, we’ll look back and say “remember the days when computers had spinning hard drives?”

Until the next repaving, take care.

by kcully at July 01, 2014 07:24 PM

The Problem Solver

Denormalizing data in RavenDB

One of the things with RavenDB, or NoSQL document databases in general, is that you don’t do joins to combine data. Normally you try to model the documents you store in such a way that the data you need for the most common actions is stored in the document itself. That often means denormalizing data. When you first get started with document databases that feels strange; after all, with relational databases we are taught to normalize data as much as possible and not repeat the same values. While normalizing data is great for updates and for minimizing the size of databases, it is less than ideal for querying. This is because when querying we need to join various tables to turn abstract foreign keys into something that is actually understandable by the end user. And while relational databases are pretty good at joining tables, these operations are not free; instead we pay for that with every query we do. Now it turns out that most applications are read heavy and not write heavy. And as a result, optimizing for writes actually hurts something like 99% of the database operations we do.

With a document database like RavenDB we can’t even do a join. When we normalize data, the client actively has to fetch the related data and turn those abstract identities of other documents into values that are meaningful to a user. Normally the documents in a RavenDB database are much more denormalized than similar data in a SQL Server database would be. The result is that for most operations a single IDocumentSession.Load() is enough to work with a document.
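
To make that last point concrete, here is a tiny sketch (the identifiers are mine; the denormalized Order shape is developed below) of what "a single Load is enough" looks like with the RavenDB client:

using (IDocumentSession session = documentStore.OpenSession())
{
    // One round trip; the names needed for display travel with the document.
    var order = session.Load<Order>("orders/42");
    Console.WriteLine(order.Company.Name);          // no extra Load for the customer
    Console.WriteLine(order.Lines[0].Product.Name); // nor for the products
}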

 

What data makes sense to denormalize?

Not everything makes sense to denormalize; normally only relatively static data that is frequently needed is denormalized. Why relatively static data? Simple: every time the master document for that piece of data is updated, all documents where it might be denormalized also need to be updated. And while not especially difficult, that would become a bottleneck if it happened too often. Fortunately there is enough data that fits the criteria.
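
To illustrate the cost side of that trade-off, here is a sketch of what a single company rename would require once its name has been copied into every order. The index name and set-based patching commands are the same ones used in the follow-up post further up this page; the document shape and specific identifiers are assumptions based on the structure introduced below.

// One set-based patch touching every order that carries the old name.
documentStore.DatabaseCommands.UpdateByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag:Orders" },
    new ScriptedPatchRequest
    {
        Script = @"if (this.Company.Id === 'companies/11') {
                       this.Company.Name = 'New Company Name';
                   }"
    });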

 

The RavenDB example data

The de-facto sample data for SQL Server is the Northwind database. And by sheer coincidence it so happens that RavenDB also ships with this same database, except now in document form. With lots of .NET developers being familiar with SQL Server, this Northwind database is often the first stop when looking at how a document database should be constructed.

[Screenshot: the Northwind collections in the RavenDB studio]

As you can see in the screenshot from the RavenDB studio a relatively small number of collections replaces the tables from SQL Server. Nice :-)

[Screenshot: an Order document in the RavenDB studio]

The structure used to save an order is also nice and simple, just the Order and OrderLine classes saved in a single document.

public class Order
{
    public string Id { get; set; }
    public string Company { get; set; }
    public string Employee { get; set; }
    public DateTime OrderedAt { get; set; }
    public DateTime RequireAt { get; set; }
    public DateTime? ShippedAt { get; set; }
    public Address ShipTo { get; set; }
    public string ShipVia { get; set; }
    public decimal Freight { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public string Product { get; set; }
    public string ProductName { get; set; }
    public decimal PricePerUnit { get; set; }
    public int Quantity { get; set; }
    public decimal Discount { get; set; }
}

 

One missing thing

Nice as this may be, there is one thing missing. Other than the name and price of the product being sold, no data is denormalized. This means that for even the most basic display to the user we will need to load additional documents. For example, the Company property in an order just contains the identity of a customer. If we want to display the order, the very least we would have to do is load the company and display the customer’s name instead of its identity. And the same is true for the employee and shipper.
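
For comparison, with the sample database as shipped, showing just the customer's name on an order already means a second round trip; roughly like this sketch (identifiers are mine):

var order = session.Load<Order>("orders/42");

// order.Company is only a string id here (e.g. "companies/11"),
// so the whole company document has to be loaded just to show its name.
var company = session.Load<Company>(order.Company);
Console.WriteLine(company.Name);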

While this sample database is not denormalized, it turns out it is quite easy to do so ourselves.

 

Denormalizing the RavenDB Northwind database

The first step is to store the related name along with each referred to identity as seen below.

[Screenshot: an Order document with denormalized Company, Employee, ShipVia and Product references]

 

The order is the same, but this time we can do common user-interaction operations with just the one document and not be required to load additional documents. It turns out this is quite easy to do. The RavenDB documentation has a nice description of how to do that using INamedDocument and DenormalizedReference<T>. Using this technique makes it really easy and consistent to work with denormalized data and create a document structure like the one above. The changes to the Order and OrderLine classes are minimal. All I had to do was replace the string-typed Company property with one of type DenormalizedReference<Company>, and do the same for the other references.

public class Order
{
    public string Id { get; set; }
    public DenormalizedReference<Company> Company { get; set; }
    public DenormalizedReference<Employee> Employee { get; set; }
    public DateTime OrderedAt { get; set; }
    public DateTime RequireAt { get; set; }
    public DateTime? ShippedAt { get; set; }
    public Address ShipTo { get; set; }
    public DenormalizedReference<Shipper> ShipVia { get; set; }
    public decimal Freight { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public DenormalizedReference<Product> Product { get; set; }
    public string ProductName { get; set; }
    public decimal PricePerUnit { get; set; }
    public int Quantity { get; set; }
    public decimal Discount { get; set; }
}

 

The DenormalizedReference<T> and INamedDocument are also really simple and straight from the RavenDB documentation.

public class DenormalizedReference<T> where T : INamedDocument
{
    public string Id { get; set; }
    public string Name { get; set; }

    public static implicit operator DenormalizedReference<T>(T doc)
    {
        return new DenormalizedReference<T>
        {
            Id = doc.Id,
            Name = doc.Name
        };
    }
}

public interface INamedDocument
{
    string Id { get; }
    string Name { get; }
}

 

The implicit cast operator in the DenormalizedReference<T> makes using this really simple. Just assign a document to the property and it will take care of creating the proper reference.

var order = session.Load<Order>("orders/42");
order.Company = session.Load<Company>("companies/11");

 

One useful extension method

Loading the single document and doing common operations should be easy now but there are still operations where you will need more data from the related entity. Loading them is easy enough.

var customer = session.Load<Company>(order.Company.Id);

 

However, using the DenormalizedReference<T>, the structure and type are already captured in the Order class. Using this in a simple extension method makes the code even simpler, which is always nice :-)

public static class IDocumentSessionExtensions
{
    public static T Load<T>(this IDocumentSession session, DenormalizedReference<T> reference)
        where T : INamedDocument
    {
        return session.Load<T>(reference.Id);
    }
}

 

This simple extension method will let us load the customer as follows:

var customer = session.Load(order.Company);

 

Saves another few keystrokes and completely type safe. Sweet :-)

 

Enjoy!

by Maurice at July 01, 2014 09:30 AM

Rick Strahl's Web Log

Project Navigation and File Nesting in ASP.NET MVC Projects

More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important.

Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently.

To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page.

Alternatives

I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow.

The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves the code-related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days and more with client side code, so I want the server code separated from that.

The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and script code. In these days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML view templates for client SPA apps like Angular.

Here’s an example of what this looks like in a relatively small project:

[Screenshot: project layout with a root Code folder]

The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – and once that’s open, it’s typically the only server code I work with regularly. Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.

Nesting Page specific content

In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page-level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder.

This looks something like this:

[Screenshot: page-level .js and .css files nested under their parent view]

In order for this to work you also have to make a configuration change inside of the /Views/web.config file, as the Views folder is protected by the BlockViewHandler, which prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked and static resources like the .js and .css files can be served:

<system.webServer>
  <handlers>
    <remove name="BlockViewHandler"/>
    <add name="BlockViewHandler" path="*.cshtml" verb="*"
         preCondition="integratedMode"
         type="System.Web.HttpNotFoundHandler" />
  </handlers>
</system.webServer>

With this in place, from inside of your Views you can then reference those same resources like this:

<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />

and

<script src="~/Views/Admin/QuizPrognosisItems.js"></script>

which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well.

Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file).

However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view.

[Screenshot: the File Nesting add-in’s context menu option]

I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together.

Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and CSS files for minification as you may end up looking in multiple folders instead of a single folder. But – again – that’s a one time configuration step that’s easily handled and much less intrusive than constantly having to search for files in your project.
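
If you do nest page-level scripts under Views and still want bundling, one way to cope – a sketch using ASP.NET's System.Web.Optimization, with bundle names and paths of my own choosing rather than anything from this article – is to let the bundles pull matching files out of the Views tree recursively:

using System.Web.Optimization;

public class ViewAssetBundles
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Pick up every page-level script nested under the Views folder
        // (the last argument enables searching subdirectories).
        bundles.Add(new ScriptBundle("~/bundles/viewscripts")
            .IncludeDirectory("~/Views", "*.js", true));

        // And the matching page-level stylesheets.
        bundles.Add(new StyleBundle("~/bundles/viewstyles")
            .IncludeDirectory("~/Views", "*.css", true));
    }
}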

Client Side Folders

The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources.

For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently.

In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder.

Here’s what that looks like for me – here this is an AngularJs project:

[Screenshot: App folder layout in an AngularJS project]

In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application.

In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files.

This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders.

Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why!

Skipping Project Navigation altogether with Resharper

I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search.

Here’s what Resharper’s jump to anything looks like:

[Screenshot: Resharper’s Go to Anything search box]

Resharper’s Goto Anything box lets you type and quick-search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you’re looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don’t) and you know what you’re looking for, it’s by far the quickest way to find things in a project. It’s a shame that this sort of simple search interface isn’t part of the native Visual Studio IDE.

Work how you like to work

Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently.

As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you.

Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit.

You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment.

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in ASP.NET  MVC  

by Rick Strahl at July 01, 2014 05:28 AM

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at July 01, 2014 05:00 AM

June 30, 2014

Chris Sainty

Veil - Getting Started Standalone

In addition to integrating with Nancy, Veil can be used by itself in any project for advanced text templating.

To get started you simply need to install one of the Veil parsers for the syntax you prefer.

  • SuperSimple: Install-Package Veil.SuperSimple
  • Handlebars: Install-Package Veil.Handlebars

June 30, 2014 10:00 PM

Calvin Hsia's WebLog

WPF immediate mode graphics

Windows Presentation Foundation (WPF) is a retained mode graphics system (see Retained Mode Versus Immediate Mode ). That means when you write code to draw something, you’re actually declaring a set of graphics objects (like lines or shapes) to use. The...(read more)

by CalvinH at June 30, 2014 06:16 PM

FoxProWiki

VFPSetClasslib

SET CLASSLIB opens one or more class libraries to provide access to a collection of classes. Please discuss the pros/cons to having a one-class-per-classlib (VCX) design versus a multiple-class-per-classlib design.

June 30, 2014 03:50 PM

FoxCentral News

Southwest Fox/Xbase++ 2014: Super-Saver expires today

June 30th is today and we thought we would remind you in case you forgot: We still need people to register to make Southwest Fox happen. Super-saver discount is $125 and you can select a free pre-conference session. Head over to the registration Web site today: http://geekgatherings.com/Registration.

by Southwest Fox Conference at June 30, 2014 03:49 PM

CULLY Technologies, LLC

ShowURL on Mac in Xojo

I’m working on some cross-platform application development in Xojo. I was pumping out some data to HTML for reporting purposes, and using the host machine to display that HTML. It was working like a charm on Linux; however, I wasn’t getting the HTML to display on the Mac. Well, I wasn’t reading far enough down the documentation. Here’s what you need to do on the Mac. This works on Linux as well, so one solution works for both.


' ... I'm assuming that you've created your HTML file already ...
DIM oMyDir AS FolderItem = SpecialFolder.ApplicationData.Child("MyApp")
DIM oMyHTMLFile AS FolderItem = oMyDir.Child("MyHtmlFile.html")
ShowURL( oMyHTMLFile.URLPath ) '<-- Shows HTML file in the comp's browser

I was trying all of the other path types (AbsolutePath, ShellPath, NativePath) and they worked on Linux, but not on the Mac. Use URLPath.

by kcully at June 30, 2014 12:48 PM

Rahul Desai's Blog

Dynamics CRM 2011 Recover Deleted Active Directory User Accounts

 

Sometimes user accounts in CRM get out of sync with Active Directory, and this can be related to:

  • AD account deleted and re-created
  • AD restores
  • Or anything else that would change the objectGUID in AD

Dynamics CRM 2011 Recover Deleted Active Directory User Accounts – TechNet Articles – United States (English) – TechNet Wiki

by Rahul Desai at June 30, 2014 06:20 AM

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at June 30, 2014 05:00 AM

June 29, 2014

Articles

MSCC: Purpose and benefits of Version Control Systems (VCS)

Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed. It was just about finding the right date. Furthermore, it was a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates with a book Avinash is currently working on. All the best with this approach, and do reach out to our MSCC craftsmen for reviews.

Once again a big thank you to Orange Ebene Accelerator for providing the venue for us, and to the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon.

Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup:

"Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers."

And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down...

Reactions of other attendees

If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them:

"Inspiring. Helped me understand more about GIT." -- Sean on event comments

"Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments

"Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.
It was great. Learned a lot from this meetup!!" -- Leonardo on event comments

"Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls

Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting.

Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers.

Our agenda of the day

Honestly, we had a bumpy start...

First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might be additional people coming along. Then, for whatever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey, it went smoothly for the rest of the time. And last but not least... our first speaker Nayar got stuck somewhere on the road. ;-)

Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery!

Similar to the ability of touch-typing I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS.

Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself.

git - a modern approach to VCS - Nayar

What a tour!

Nayar gave us the full tour of git from start to finish, even touching on some more advanced techniques. First, he explained the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of the source code at any time. Then he showed how easy it is to install git on an Ubuntu based system, but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions.

Next, he showed us how to set the initial configuration values of user name and email address which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure is always the same, and Nayar clearly mentioned this to the attendees.

Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development cycles you will need fewer than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge.

And Nayar demo'd all of them. Much to the delight of everyone he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision.

Using gitk to display and browse information of a local git repository.

And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github, showing the web-based repository browser and history handling, and then explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects.

Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details about their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free but there might be other important aspects that might have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development.

Visual Studio Online (VSO) - Jochen

Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other such web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is geared towards software development on Windows systems and the bonds are tight towards the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected.

So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates the Application Life Cycle Methodology (ALM) of Microsoft in their platform. Meaning that you get agile project management with Backlogs, Sprints, Burn-down charts as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get.

And, something I mentioned briefly at the beginning of our meeting: VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors.

Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository

But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites.

Upcoming Events and networking

Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups. Because I would rather see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rushing off to grab some food.

I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...

In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation and I'm looking forward to getting a grip on your book very soon.

My resume of the day

Do not work in IT without any kind of version control system!

Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without safety helmet. You don't do that! End of discussion. ;-)

Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview on how to start using git and how to connect to various remote services like github or VSO.

by Jochen Kirstaetter (jochen@kirstaetter.name) at June 29, 2014 06:02 AM

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at June 29, 2014 05:00 AM

Craig Bailey

When Serious Issues are Co-opted for Marketing

It’s interesting to see how Verizon (a US telco*) have picked up on a topical (and worthy) issue and co-opted it for their own marketing**. And they get positive press about it from the likes of HuffPo.

A little while ago I would have been really cynical about these kinds of ads that basically take ‘serious issue’ topics and manipulate them into feel good, positive-by-association messaging for companies. But since we know that a large percentage of consumers actually believe numbers and stats in advertising are in fact real, this kind of ad is actually good from an awareness point of view. The danger though is that others will dismiss the topic, purely because it is being presented by a manipulative advertising campaign…

I’d love to see what Russel and Todd would say about this if The Gruen Transfer was still going.

* Telcos, as an industry are generally disliked by consumers – see this ACSI report (especially p14, note: need to fill in a form to access it)

** It’s not a new approach by any means – just think of the Dove commercials

The post When Serious Issues are Co-opted for Marketing appeared first on Craig Bailey.

by Craig Bailey at June 29, 2014 03:08 AM

June 28, 2014

Chris Sainty

Veil - Getting Started With Nancy

Nancy is a great framework for building websites and it has been an important goal for Veil to integrate seamlessly in to your Nancy projects.

To get started you will first need to install Veil's view engine wrapper for Nancy.

Install-Package Nancy.ViewEngines.Veil

June 28, 2014 10:00 PM

Alex Feldstein

Photo of the Day


Tri-color heron (Juvenile)
Wakodahatchee wetlands
Delray, Florida

by Alex Feldstein (noreply@blogger.com) at June 28, 2014 05:00 AM

June 27, 2014

CULLY Technologies, LLC

Xojo’s (not so) secret weapon

I’m working on a cross-platform application that I want to make available on all desktop platforms: Windows, Mac and Linux. I develop on Linux and everything is, naturally, working. It’s working because I’ve worked through the issues on the development platform during the development cycle. I get to the point where I build the application for all platforms which involves checking a few boxes in Xojo and hitting the ‘Build’ button. Bing, bang, boom it completes in just a couple of seconds.

My sons have ‘stolen’ my Mac Mini so I have a friend test out my application on a Mac and … it has trouble creating and writing to a settings database. {Pause. Eye blink. More eye blinking.} What is a developer to do? Turn to the Xojo Remote Debugger, that’s what!

I get my Mac back and plug it in. I install Xojo, and with it comes the Remote Debugger Desktop app. I start it running.

On my development machine (a LinuxMint VM inside of my Linux laptop, btw) I start Xojo, load my project, and then choose ‘Project\Run Remotely\Setup…’ One adjustment I needed to make was to make sure that my VM was on the same subnet as the remote Mac. If they’re on different subnets, they won’t talk without additional configuration. I also needed to make sure that my Linux box wasn’t blocking port 44553, which is the default port that the Remote Debugger communicates on. Once you see the remote machine listed in the “Remote Debug Hosts” list, you’re ready to actually start debugging.

To run the application remotely, but potentially debug the code, choose the menu option of “Project\Run Remotely\{Remote Machine Name}”. You’ll see it assembling code, then connecting to the remote computer and … you should see your application screen appear … or an error message if you develop anything like I do! LOL.

To remotely debug, operate the user interface on the remote machine. Feel free to set breakpoints in your code. If you hit one, the Xojo IDE on your development machine will stop on that line for you to examine values and properties.

This is the killer feature: Remote debugging.

Without the remote debugger, it would be much more difficult to create working cross-platform applications. I guess you could create a development environment on each platform and use source-code control to push and pull code around, but … what a pain. Having a development environment on each platform *might* have its advantages, but the Xojo Remote Debugger is certainly a welcome benefit to developing in Xojo. I feel lucky that someone at Xojo at some point in the past came up with this great development tool. I’m well on my way to making the *small* changes to my code to get it to work on Linux, now Mac, and next Windows. For the most part Xojo takes the pain of cross-platform development off my shoulders, but there will always be some differences between the platforms. The Remote Debugger is essential in these cases.

by kcully at June 27, 2014 01:03 PM