Planet FoxPro

October 31, 2014

Alex Feldstein

October 30, 2014

CULLY Technologies, LLC

How to reduce your Internet bandwidth usage

Like many people I’ve been hearing about recently, we got hit by some Internet usage overages. Currently, I’m on the Blast Plus package with Comcast. This provides for 300GB per month. That sounds like a lot, until you think of what streaming video adds up to. Comcast charges $10 for an extra 50GB of transfer per month. We’ve hit that a couple of times this year. It’s time to tighten the belt to help prevent these overages.

The first step is being able to measure and examine the problem. The network router in your house will typically allow you to produce reports. I reach my router by logging in through the browser interface at the address http://192.168.1.1. Your setup and router may be different, so check your router’s manual. My router allows for three reports in the Traffic Manager: Real-time, Last 24-hours, and a Daily report that produces a summary per day. Unfortunately, the daily report has a bug where the totals are over-reported, but the 24-hour report is very useful to me.

Internet_Bandwidth_Usage_Stats

Don’t worry about the spikes in traffic. Watch for long periods of heavy usage. That’s where video is buffering, and that’s the source of the problem. On a side note, be on the lookout for odd traffic at odd times. That may be an indicator that some of your machines have been compromised. Turning each device off in turn at night, while leaving certain systems on, can help narrow down and identify the “heavy usage” culprit. Note that certain systems may do updates in the middle of the night.

The biggest bandwidth hog is video. Period. It’s not the text content of websites. It’s not even pictures/images. Video is the bandwidth killer. The key to lowering Internet usage is to lower the video usage. We can either [A] not watch videos or [B] watch the videos in a lower resolution. Let’s go with option [B]!

Our two biggest video services are [1] YouTube and [2] Netflix. You may also use Amazon Prime, and other services.

To lower your YouTube usage, log in to your account, and then click on your icon, which is currently in the upper right of the browser page. Then choose the “Account Settings” menu option. Under the “Playback” option, choose “I have a slow connection. Never play higher-quality video.” Then press the “Save” button to save your changes. I barely notice the difference in quality. Oftentimes, I will hand-lower the resolution from 480p to 360p to further save on bandwidth. Make sure to do this for each YouTube account that you have in your house: spouse, kids, dog, etc. They each need to make this change.
Internet_Bandwidth_YouTube

Internet_Bandwidth_Netflix01

Next up is the settings of Netflix. Even if you use a smart TV, a Roku, or Kindle Fire TV, you change these settings from their website. Log in to Netflix, and then click on your icon in the upper right and go to “Your Account”. Click on the “Playback settings” menu option. Then choose the “Low” option under “Data Usage per Screen”. Click the “Save” button to save your settings.
Internet_Bandwidth_Netflix02

What are the results? We went from using 10-17GB per day, to about 4-6GB per day. I barely notice the difference in video quality. Of course, I’m not really picky with my video quality. Your experience may differ.

It doesn’t hurt to have the DVD subscription of movies as well. That gives your Netflix viewing a break on certain nights. If you’re paying a lot in data overages, the DVD subscription may pay for itself.

I hope this helps. Let me know if I’ve missed any aspects on big savings in bandwidth usage.

by kcully at October 30, 2014 07:30 PM

Calvin Hsia's WebLog

Export your data to Excel using CSV and all data appears in one column

In many prior posts, I export data to Excel by writing to a TEMP file and just starting that TEMP file, which starts Excel, if Excel is on the machine.   var tempFileName = System.IO.Path.ChangeExtension(System.IO.Path.GetTempFileName(),...(read more)
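
Calvin’s snippet is C#, but the same temp-file idea is easy to sketch in VFP as well: dump the data to a temporary file and let the shell open it, which launches Excel if Excel is associated with the file type. This is only an illustrative sketch of the approach described above, not Calvin’s code, and the cursor name myCursor is hypothetical:

* Illustrative VFP sketch: write the current cursor to a temp CSV and open it
DECLARE INTEGER ShellExecute IN shell32.dll ;
   INTEGER hndWin, STRING cAction, STRING cFileName, ;
   STRING cParams, STRING cDir, INTEGER nShowWin

LOCAL lcFile
lcFile = ADDBS(SYS(2023)) + SYS(3) + ".csv"   && unique file name in the TEMP folder
SELECT myCursor                               && assumes an open cursor or table (hypothetical)
COPY TO (lcFile) TYPE CSV                     && write the data out as CSV
ShellExecute(0, "open", lcFile, "", "", 1)    && open with the associated app (normally Excel)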

by CalvinH at October 30, 2014 06:25 PM

Alex Feldstein

October 29, 2014

Alex Feldstein

October 28, 2014

Alex Feldstein

October 27, 2014

Alex Feldstein

Articles

MSCC: Shellshock Survival Guide

Logo of the Mauritius Software Craftsmanship Community

The media coverage related to vulnerabilities in Linux has been quite immense lately.

After Heartbleed during the early months of 2014, we had a second major wave of problems based on a very old "feature" in the commonly used bash - Bourne Again Shell - on Linux- and BSD-based systems including Mac OS X. Well, there has been quite a bit of activity and controversial discussion around this feature, but it was obvious that it could be exploited and therefore a fix had to be made. Taking into consideration that there are literally millions of systems connected to the internet which are based on Linux or BSD, this obviously isn't a quick and easy task.

This month's meetup was organised as a joint venture between the MSCC, the LUGM and the UoM CC, and we settled down at the University of Mauritius. Thanks to the organisers; it was again a great experience to be on the campus.

Shellshock: Survival Guide

The event was originally created on Facebook, and at the MSCC we simply picked it up in order to attract more people to the meeting. Well, despite the hundreds of "Event Go'ers" on Facebook we were roughly 24 people that came together. The provided room 1.14 was big enough for everyone, and eventually we might be able to use this space on a regular basis... to be confirmed. ;-)

My point of view

Well... it's best to simply voice it out:

"Despite the technical background of Shellshock there was simply too much distraction and too many discussions going on during the meeting. I found it kind of chaotic and non-informative...

Somehow I expected a bit more regarding immediate corrections, advice on how to write better scripts and eventually something related to hardening an OS regarding bash, scripting languages and user-space applications on various Linux distributions, and Mac OS X."

Quite frankly, I was kind of disappointed by the lack of practical guidance. I mean... "survival guide" would imply that you'll learn something to take home or back to the office, to apply to your web server or office systems, or that you could integrate into your coding efforts in order to improve your skills and reduce the risk of a system exploitation, don't you agree?

Actually, I thought about my statement for some time, but it didn't come out better than this. Yes, I learned about why shellshock is dangerous and that there are patched versions of all major distributions available, but apart from that... I didn't learn anything new that would make me better aware of such situations or help me avoid them completely.

MSCC meetup: Discussion about the bash shellshock vulnerability and practical advice to secure your systems.

Reactions of other attendees

Some other bloggers already put their thoughts online... 

Both are very informative regarding the events as they happened but, much like my own observation, they note a clear lack of guidance after all.

Upcoming Events and networking

We are closing in on year's end and the advertisement for End of Year party venues is increasing. Well, at the MSCC we are already planning our second Christmas activities, too. What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

My resume of the day

Discussed, dusted and off to new discoveries!

This month's event was interesting and although there was no actual "survival guide" it is good to see that the awareness in Mauritian IT is growing, especially among students. Nowadays, you can't afford to put on blinders and pretend that your operating system is all safe and secure. It's your continuous responsibility to follow security advisory bulletins and to improve your skills in IT - and it doesn't matter whether you're a system administrator, a software developer, or a passionate web developer. With the increasing number of Internet of Things (IoT) devices, security, safety and privacy are an ongoing process. Don't just kick back and relax, the next big bang is lurking around the corner - for sure... ;-)

by Jochen Kirstaetter (jochen@kirstaetter.name) at October 27, 2014 08:11 AM

Sandstorm's Blog (Home of ssClasses)

Right-Click on TreeView

Last time, I created a "My Favorite" popup menu for my ERP app.  This allows my users to easily jump from one module to another without the need to close what they are working on.  The hotkey is F5.   Each of my users has their own My Favorite as well, so they can control what goes in it (among the modules they have access to).



I used Ctrl+F to add or remove items on the Favorite menu based on the selected/highlighted node of the ActiveX TreeView control (MSComctlLib.TreeCtrl.2), because when I looked for a right-click event back then, I could not find one.  And I did not want to waste time hunting for that missing right-click event, as all I wanted was a quick fix for my need.

However, that approach lacks appeal, so today, being a bit free to do an experiment, I told myself I would try to make that right-click work.  It turns out to be easy enough to achieve: the right-click can be handled in the MouseDown event using the first parameter, button.

When you left-click on the treeview, the button parameter receives a value of 1; when you right-click, it receives 2; and a click on the mouse scroll wheel gives a value of 4. So a simple IF condition produces what I need, with something like this:

MouseDown Event
*** ActiveX Control Event ***
LPARAMETERS button, shift, x, y
IF button = 2
   PopFave(this.selectedItem.Text)
ENDIF
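
If you also want to react to the scroll-wheel click (button = 4, as noted above), a DO CASE reads a little better than stacked IFs. A small hypothetical variation on the same event code:

*** ActiveX Control Event ***
LPARAMETERS button, shift, x, y
DO CASE
CASE button = 2                && right mouse button
   PopFave(this.selectedItem.Text)
CASE button = 4                && scroll-wheel (middle button) click
   * handle the middle-click here (hypothetical)
ENDCASE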




And there you have it, just in case you are wondering how to handle a right-click on your treeview control as well.  Cheers!


by Jun Tangunan (noreply@blogger.com) at October 27, 2014 06:18 AM

Alex Feldstein

October 26, 2014

Rahul Desai's Blog

Alex Feldstein

October 25, 2014

Alex Feldstein

Rick Strahl's Web Log

AngularJs and Promises with the $http Service

When using the $http service with Angular I’ve often wondered why the $http service opts to use a custom Promise instance that has extension methods for .success() and .error(), rather than relying on the more standard .then() function to handle the callbacks. Traditional promises (using the $q Service in Angular) have a .then() function to provide a continuation on success or failure, and .then() receives parameters for a success and failure callback. The various $http.XXXX functions however, typically use the .success() and .error() functions to handle callbacks. Underneath the $http callbacks there is still a $q Promise, but the extension functions abstract away some of the ugliness that is internal to the $http service.

This might explain why, when looking at samples of Angular code that use the $http service inside of custom services, I often see code that creates a new wrapper Promise and returns that back to the caller rather than the original $http Promise.

The idea is simple enough – you want to create a service that captures the data and stores it, and then notifies the controller that the data has changed or refreshed. Let’s look at a few different approaches to help us understand how the $http service works with its custom promises.

Let’s look at a simple example (also on Plunker). Assume you have a small HTML block with an ng-repeat that displays some data:

<div class="container" ng-controller="albumsController as view" style="padding: 20px;">
    <ul class="list-group">
        <li class="list-group-item" ng-repeat="album in view.albums">
        {{album.albumName}} <i class="">{{album.year}}</i>
    </ul>
</div> 

You then implement a service to get the data via $http and a controller that can use the data that the service provides.

The Verbose Way

Let’s start with the more complex verbose way of creating an extra promise which seems to be a commonly used pattern I’ve seen in a number of examples (including a number of online training courses). I want to start with this because it nicely describes the common usage pattern for creating custom Promises in JavaScript.

Here’s what this looks like in an Angular factory service implementation:

app.factory('albumService', ['$http', '$q',
    function albumService($http, $q) {
        // interface
        var service = {
            albums: [],
            getAlbums: getAlbums
        };
        return service;

        // implementation
        function getAlbums() {
            var def = $q.defer();

            $http.get("./albums.ms")
                .success(function(data) {
                    service.albums = data;
                    def.resolve(data);
                })
                .error(function() {
                    def.reject("Failed to get albums");
                });
            return def.promise;
        }
    }]);

The code in getAlbums() creates an initial Deferred object using $q.defer(). Then $http.get() is called, and when the $http callback returns, either .resolve() or .reject() is called on the deferred instance. When – later on in the future – the HTTP call returns, it triggers the Deferred to fire its continuation callbacks to the success or failure handlers of whoever is listening to the promise via its .then() function. Before that callback fires, though, the – at this point unresolved – Promise is first returned to the caller, which in this case is the controller that’s calling this service function.

The calling controller can now capture the service result by attaching to the resulting promise like this:

app.controller('albumsController', [
    '$scope', 'albumService',
    function albumsController($scope, albumService) {
        var vm = this;
        vm.albums = [];

        vm.getAlbums = function() {
            albumService.getAlbums()
                .then(function(albums) {
                    vm.albums = albums;
                    console.log('albums returned to controller.');
                },
                function(data) {
                    console.log('albums retrieval failed.')
                });
        };
        
        vm.getAlbums();
    }
]);

Now when the HTTP call succeeds (or fails), it comes back to the $http.get() .success or .error functions, which in turn resolve (or reject) the wrapper Promise, which then fires the .then() in the controller with either result data (success) or an http error object (error).

When you run this, the controller’s view.albums property is updated, which in turn causes the list of albums to render in the browser.

Sweet it works. But – the use of the extra deferred is code that you can do without in most cases.

$http functions already return Promises

The $http functions  already return a Promise object themselves. This means there’s really very little need to create a new deferred and pass the associated promise back, much less having to handle the resolving and rejecting code as part of your service logic. Using the extra Promise to me would make sense only if you actually need to return something different than what the $http call is returning and you can’t chain the promise.

Promises can be chained, meaning you can have multiple listeners on a single Promise. So the service is one listener as it handles its .success and .error calls, but you can also pass that promise back to the caller and it can also receive a callback on that same Promise – after the service callback has fired.

Using the raw $http Promise as a result, the previous service getAlbums() function could be re-written a bit simpler like this:

function getAlbumsSimple() {
    return $http.get("albums.js")
        .success(function(data) {
            service.albums = data;
        });
}

This code simply captures the data from the service which is the albums JSON collection and assigns it to the service properties. The actual result from the call is a Promise instance and that is returned. Notice that the service here doesn’t handle any errors – that’s actually deferred to the client which may have to display some error information in the UI. If you wanted to pre-process error information you’d implement the error handler here and set something like an object on the service.

The controller can now consume this service method simply like this:

vm.getAlbumsSimple = function() {
    albumService.getAlbumsSimple()
        .success(function(albums) {            
            vm.albums = albums;
            console.log('albums returned to controller.', vm.albums);
        })
        .error(function() {
            console.log('albums retrieval failed.');
        });
};

using the same familiar .success() and .error() functions that are used on the original $http functions.

The code is similar to the original .then() Controller example, except that you are using .success() and .error() instead of .then(). This provides the albums collection directly to the .success() callback, and with our albums assigned it works just fine.

This works because promises can be chained and have multiple listeners. Promises guarantee that callbacks are called in the order they are attached, so the service function gets first crack and the controller function gets called after that. Both get notified and both can now respond off the single Promise instance.

However, the downside of this approach is that you have to know that the service is returning you an $http promise that has .success() and .error() functions which is kind of … non-standard.

What about $http.XXX.then()?

You can also still use the .then() function on an $http.XXX function call, but the behavior changes slightly from the original call. Here is the same controller code written with the .then() function:

vm.getAlbumsSimple = function() {
    albumService.getAlbumsSimple()
        .then(function (httpData) {
            vm.albums = httpData.data;
        },function(httpData) {
            console.log('albums retrieval failed.');
        });
};

Unfortunately the .then() function receives a somewhat different parameter signature than the .success and .error calls do. A top-level data object is returned in the success callback of .then(), and the actual result data is attached to the .data property of that object. The object also contains other information about the $http request.

Here’s what the actual object looks like:

$httpThen

The object holds some HTTP request data like the headers, status code and status text, which is useful on failures. And it has a .data member that holds the actual request data that you’re interested in. Hence you need to do:

vm.albums = httpData.data;

inside of the .then() callback to get at the data. This is not quite what you’d expect and I suspect one of the reasons why so many people use a wrapper promise to hide this complex object and return the data directly as part of the wrapper Promise .then() call.

$http.then() Error Callback

When using .then() with an $http call and an error occurs, you get the same object returned to you, but now the data member contains the raw HTTP response from the server rather than parsed result data. Here’s the httpData object from the error callback function parameter:

$httpThenError

It’s nice that the error callback returns the raw HTTP response: if you’re calling a REST service that returns a 500 error result along with a valid JSON error payload, you can potentially take action and parse the error into something that’s usable on the client. That’s a nice touch.

$http.error() Callback

Since we’re speaking of error callbacks, let’s also look at the .error() callback parameters. The error callback has a completely different parameter and object layout than the .then() error callback, which is unfortunate. Here’s an example of the signature of the $http.XXX.error() function:

albumService.getAlbumsSimple()
    .success(function(albums) {
        vm.albums = albums;
    })
    .error(function (http, status, fnc, httpObj ) {        
        console.log('albums retrieval failed.',http,status,httpObj);
    });

The error callback receives parameters for the full HTTP response, a status code, and an http object that looks like this:

$httpError

Seems pretty crazy that the Angular team chose a completely different parameter signature on this error function compared to .then(). The signature here is similar to jQuery’s and I suspect that’s why this was done, although the httpObj has its own custom structure. Essentially it looks like the .then() method should be considered an internal function with .success() and .error() being the public interface. Again very unfortunate as this breaks the typical expectation of promises that use .then() for code continuation and expect a single data result object on success calls.

To be fair though, the data contained in these result parameters is very complete, and it does allow you to build good error messages, assuming the server returns decent error information in the right (JSON) format for you to do something with. Inconsistent - yes, but at least it’s complete!

$http Inconsistency

I find it a bit frustrating that Angular chose to create the $http methods with custom Promises that are in effect behaving differently than stock promises. By implementing .success() and .error() $http is effectively hiding some of the underlying details of the raw promise that is fired on the HTTP request. Even worse is that .then() is essentially behaving like an internal function rather than the public interface. Clearly the intent by the Angular team was to have consumers use .success() and .error() rather than .then().

This behavior provides some additional capabilities, but it seems very counterintuitive and inconsistent. It seems like it would have been a much better choice to allow the .then() method to work the same as .success() and .error() with the same parameter signatures, adding extra parameters for the additional data that might be needed internally. Or even to not have .success() and .error() at all and have .then() just return the same values that those methods return, to be consistent with the way promises are used elsewhere in Angular and in JavaScript in general.

This inconsistency, and the fact that the .then() data object exposes $http transport details, likely explains why so many people wrap the $http promise into another promise: it lets them return a consistent promise result to the caller so that promise usage stays uniform throughout the application. It just seems this would have been nice to do at the actual framework level in the first place.

Summary

Personally I’ve resigned myself to simply forwarding the $http generated Promises and using .success() and .error() at the cost of a little bit of inconsistency. At this point I have to know that this particular call in my service returns an $http promise, and that I need to call the .success() and .error() functions on it rather than .then() to handle the callbacks. But I still prefer that to wrapping my services with extra Promises. Regardless of where you push this behavior, somewhere in the stack you end up with this inconsistency where the difference between $http promises and stock Promises shows up – so I might as well push it up into the application layer and save some senseless coding that only hides an implementation detail.

I definitely don’t like the alternative of wrapping every service $http call into a wrapper promise, since that’s tedious and painful to read in the service and adds another indirection to every service call. But I guess it depends on how much you value consistency – maybe it’s worth it to you to have the extra layer and treat every Promise in your application the same way using .then() syntax.

I’ve written this up mainly to help me remember all the different ways that results and errors are returned – I have a feeling I’ll find myself coming back to this page frequently to ‘remember’. Hopefully some of you find this useful as well.

Resources

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in Angular  JavaScript  

by Rick Strahl at October 25, 2014 04:25 AM

October 24, 2014

Alex Feldstein

Sandstorm's Blog (Home of ssClasses)

Easy ways to Capture images

I have not blogged for a long time, and you guys know why.  Anyway, here is something that may be useful to you, so I am making this entry here.  This is about FoxForm, shared with us in the Foxite forum.  Combined with a few other things, it allows us to easily capture our form, our app's screen and the desktop itself with only a few lines of code.  In addition, you can send those captured images directly to the printer.  Here are the steps:


  1. You have to download FoxForm from Foxite (you have to be a member).   Go to its download section and search for FoxForm (original code written and submitted by Eric Den Doop).  Once downloaded, extract foxform.dll to your app's main folder and register it by opening CMD, changing to your main app's folder and typing regsvr32 foxform.dll.  You need administrative rights to register it.
  2. Once it is registered, here are some easy ways to capture things.  Anyway, let us declare some things first like ShellExecute()


Declare Integer ShellExecute In shell32.dll ;
      INTEGER hndWin, ;
      STRING cAction, ;
      STRING cFileName, ;
      STRING cParams, ;
      STRING cDir, ;
      INTEGER nShowWin

Okay we are on the go, let us start with capturing

Capture your active form

Local loTmp, HWnd, lcFile
loTmp = CREATEOBJECT("FoxForm.Form")
HWnd = _WhToHwnd(_wfindtitl(Thisform.Caption))
lcFile = Addbs(Getenv("TMP"))+Sys(3)+".bmp"
loTmp.SaveAsBMP(HWnd,m.lcFile)
* Show the result outright
ShellExecute(0,"open",m.lcFile,"","",1)

Capture your entire app's screen

Local loTmp, HWnd, lcFile
loTmp = CREATEOBJECT("FoxForm.Form")
HWnd = _WhToHwnd(_wfindtitl(_Screen.Caption))
lcFile = Addbs(Getenv("TMP"))+Sys(3)+".bmp"
loTmp.SaveAsBMP(HWnd,m.lcFile)
ShellExecute(0,"open",m.lcFile,"","",1)  

There is a chance that your app's screen is not in a maximized state. So what if what you really want is to capture the entire screen, without relying on your app's WindowState?

Capture entire desktop screen

* Declare this first if not yet declared somewhere else
Declare Integer GetDesktopWindow In user32

LOCAL loTmp, hWnd, lcFile
loTmp = CREATEOBJECT("FoxForm.Form")
hWnd = GetDeskTopWindow()
lcFile = ADDBS(GETENV("TMP"))+SYS(3)+".bmp"
loTmp.SaveAsBMP(hWnd,m.lcFile)
ShellExecute(0,"open",m.lcFile,"","",1)  

And finally, here is how you can send the result directly to the printer instead of to a file

Capture Desktop and Print


Declare Integer GetDesktopWindow In user32

LOCAL loTmp, hWnd
loTmp = CREATEOBJECT("FoxForm.Form")
hWnd = GetDeskTopWindow()
lotmp.SendToPrinter(hWnd)

Although we can likewise use ShellExecute() for that

Declare Integer GetDesktopWindow In user32

LOCAL loTmp, hWnd, lcFile
loTmp = CREATEOBJECT("FoxForm.Form")
hWnd = GetDeskTopWindow()
lcFile = ADDBS(GETENV("TMP"))+SYS(3)+".bmp"
loTmp.SaveAsBMP(hWnd,m.lcFile)
ShellExecute(0,"print",m.lcFile,"","",1)  


The first approach, though, lets us send to the printer directly, while the second one using ShellExecute() requires us to save the result to a file first and then send it to the printer.

When I found the original code inside Foxite, it used Foxtools.fll.  But when I tried it here, it works without it.  In case it does not work properly on your end though, you have to add this line at the top:

SET LIBRARY TO HOME()+"foxtools.fll" ADDITIVE 


Cheers!

by Jun Tangunan (noreply@blogger.com) at October 24, 2014 01:47 AM

October 23, 2014

Kevin Ragsdale's FoxPro Blog

I Need Some SUBSTR() Help, Please!

Last week, I presented a session titled, “Unicode Made Easier with SQLite” at the Southwest Fox 2014 conference. At the end of the session, an attendee asked about manipulating strings, specifically using SUBSTR() on UTF-8 encoded strings. Last night, I played around with creating my own SUBSTR() function to deal with UTF-8 encoded strings, and […]
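
Kevin’s full write-up is behind the link, but for readers who want to experiment right away, here is one minimal, hypothetical sketch of a character-based SUBSTR() for UTF-8 strings. It is not Kevin’s code: it simply walks the bytes and treats every byte that is not a UTF-8 continuation byte (binary 10xxxxxx) as the start of a new character, and it ignores combining characters entirely:

* Illustrative sketch: character-based SUBSTR() for UTF-8 encoded strings
FUNCTION SubstrUtf8(tcString, tnStart, tnCount)
   LOCAL lnPos, lnChar, lnFrom, lnTo, lnByte
   lnChar = 0
   lnFrom = 0
   lnTo   = LEN(tcString)
   FOR lnPos = 1 TO LEN(tcString)
      lnByte = ASC(SUBSTR(tcString, lnPos, 1))
      IF BITAND(lnByte, 0xC0) # 0x80        && not a continuation byte: a new character starts here
         lnChar = lnChar + 1
         IF lnChar = tnStart
            lnFrom = lnPos                  && first byte of the requested slice
         ENDIF
         IF lnChar = tnStart + tnCount
            lnTo = lnPos - 1                && last byte before the next character
            EXIT
         ENDIF
      ENDIF
   ENDFOR
   RETURN IIF(lnFrom = 0, "", SUBSTR(tcString, lnFrom, lnTo - lnFrom + 1))
ENDFUNC

Called as SubstrUtf8(lcUtf8, 2, 3), it returns characters 2 through 4 of the string regardless of how many bytes each character occupies.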

The post I Need Some SUBSTR() Help, Please! appeared first on Kevin Ragsdale.

by Kevin Ragsdale at October 23, 2014 01:16 PM

Alex Feldstein

October 22, 2014

Alex Feldstein

The incomparable Tommy Emmanuel at Woodsongs

Tommy Emmanuel’s appearance at the Woodsongs radio show showing his amazing talent.

image

by Alex Feldstein (noreply@blogger.com) at October 22, 2014 09:18 AM

October 21, 2014

Alex Feldstein

Photo of the Day


South Pointe Park - At the tip of Miami Beach - (2-shot pano)

by Alex Feldstein (noreply@blogger.com) at October 21, 2014 05:00 AM

October 20, 2014

Rick Strahl's Web Log

A jquery-watch Plug-in for watching CSS styles and Attributes

webmonitorlogo_smaller

A few years back I wrote a small jQuery plug-in used for monitoring changes to CSS styles of a DOM element. The plug-in allows for monitoring CSS styles and attributes on an element and then getting notified if the monitored CSS style changed. This can be useful to sync up two objects or to take action when certain conditions are true after an element update.

The original plug-in worked, but was based on old APIs that have since been deprecated in some browsers. There’s always been a fallback to a very inefficient polling mechanism, and that’s what unfortunately had become the most common behavior.  Additionally, some jQuery changes after 1.8.3 removed some browser detection features (don’t ask!) and that actually broke the code. In short, the old plug-in – while working – was in serious need of an update. I needed to fix this plug-in for my own use as well as in response to reports from a few others using the code from the previous post.

As a result I spent a few hours today updating the plug-in and creating a new version of the jquery-watch plug-in. In the process I added a few features, like the ability to monitor attributes as well as CSS styles, and moved the code over to a GitHub repository along with some better documentation. And of course it now works with newer APIs that are supported by most browsers.

You can check out the code online at:

Here’s more about how the plug-in works and the underlying MutationObserver API it now uses.

MutationObserver to the Rescue

In the original plug-in I used DOMAttrModified and onpropertychange to detect changes. DOMAttrModified looked promising at the time and Mozilla had it implemented in its browsers. The API was supposed to become more widely used, but instead the individual DOM mutation events were marked as obsolete – and it never worked in WebKit. Likewise, Internet Explorer had onpropertychange forever in older versions. However, with the advent of IE 9 and later, onpropertychange disappeared from Standards mode and is no longer available.

Luckily though there’s now a more general purpose API using the MutationObserver object which brings together the functionality of a number of the older mutation events in a single API that can be hooked up to an element. Current versions of modern browsers all support MutationObserver – Chrome, Mozilla, IE 11 (not 10 or earlier though!), Safari and mobile Safari all work with it, which is great.

The MutationObserver API lets you monitor elements for changes on the element, in its body and in child elements. From my testing of this interface on both desktop and mobile devices, it looks like it’s pretty efficient, with events being picked up instantaneously even on moderately complex pages/elements.

Here’s what the base syntax looks like to use MutationObserver:

var element = document.getElementById("Notebox");

var observer = new MutationObserver(observerChanges);
observer.observe(element, {
    attributes: true,
    subtree: opt.watchChildren,
    childList: opt.watchChildren,
    characterData: true
});

/// when you're done observing
observer.disconnect();

function observerChanges(mutationRecord, mutationObserver) {
    console.log(mutationRecord);
}

You create a MutationObserver instance and pass a callback handler function that is called when a mutation event occurs. You then call the .observe() method to actually start monitoring events. Note that you should store away the MutationObserver instance somewhere where you can access it later to call the .disconnect() method to unload the observer. This turns out to be pretty important, as you also need to watch for recursive events and potentially unhook and rehook the observer in the callback function. More on that later when I get back to the plug-in.

Note that you can specify what you want to look at. You can look at the current element’s attributes, the character content as well as the DOM subtree, so you can actually detect child element changes as well. If you’re only interested in the actual element itself, be sure to set childList and subtree to false to avoid the extra overhead of receiving events for children.

The callback function receives a mutationRecord and an instance of the mutation observer itself. The mutationRecord is the interesting part as it contains information about what was modified in the element or subtree. You can receive multiple records in a single call which occurs if multiple changes are made to the same attribute or DOM operation.

Here’s what the Mutation record looks like:

ModifiedRecord

You can see that you get information about whether the actual element was changed via the attributeName, or you can check for added and removed nodes in child elements. In the example above I used code to make a change to the class attribute – twice. I did a jQuery .removeClass(), followed by an addClass(), which triggered these two mutation records.

Note that you don’t have to look at the actual mutation record itself and you can use the MutationObserver merely as a mechanism that something has changed. In the jquery-watch plug-in I’m about to describe, the plug-in keeps track of the properties we’re interested in and it simply reads the properties from the DOM when a change is detected and acts upon that. While  a little less efficient it makes for much simpler code and more control over what you’re looking for.

Adapting the jquery-watch plug-in

So the updated version of the jquery-watch plug-in now uses the MutationObserver API with a fallback to setInterval() polling for events. The plug-in syntax has also changed a little to pass an options object instead of a bunch of parameters that were passed before in order to allow for additional options. So if you’re updating from an older version make sure you check your calls to this plug-in and adjust for the new parameter signature.

First add a reference to jQuery and the plug-in into your page:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="scripts/jquery-watch.js"></script>

Then simply call the .watch() method on a jQuery selector:

// hook up the watcher
$("#notebox").watch({
    // specify CSS styles or attribute names to monitor
    properties: "top,left,opacity,attr_class",

    // callback function when a change is detected
    callback: function(data, i) {
        var propChanged = data.props[i];
        var newValue = data.vals[i];

        var el = this;
        var el$ = $(this);

        // do what you need based on changes
        // or do your own checks
    }
});

The two required option parameters are shown here: a comma delimited list of CSS styles or attribute names. Attribute names need to be prefixed with attr_, as in attr_class or attr_src etc. You also need to provide a callback function that receives notifications when a change event is raised. The callback is called whenever one of the specified properties changes. The callback function receives a data object and an index. The data object contains .props and .vals arrays, which hold the properties monitored and the values that were just captured. The index is the index into these arrays for the property that triggered the change event in the first place. The this pointer is scoped to the element that initiated the change event – the element jquery-watch is watching.

Note that you don’t have to do anything with the parameters – in fact I typically don’t. I usually only care to be notified and then check other values to see what I need to adjust, or just set a number of values in batch.

A quick Example – Shadowing an Element

Let’s look at a silly example that nevertheless demonstrates the functionality nicely. Assume that I have two boxes on the screen and I want to link them together so that when I move one the other moves as well. I also want to detect changes to a couple of other states. I want to know when the opacity changes for example for a fade out/in so that both boxes can simultaneously fade. I also want to track the display style so if the box is closed via code the shadow goes away as well. Finally to demonstrate attribute monitoring, I also want to track changes to the CSS classes assigned to the element so I might want to monitor the class attribute.

Let’s look at the example in detail. There are a couple of div boxes on a page:

<div class="container">
        
    <div id="notebox" class="notebox">
        <p>
            This is the master window. Go ahead drag me around and close me!
        </p>
        <p>
            The shadow window should follow me around and close/fade when I do.
        </p>
        <p>
            There's also a timer, that fires and alternates a CSS class every
            3 seconds.
        </p>
    </div>

    <div id="shadow" class="shadow">
        <p>I'm the Shadow Window!</p>
        <p>I'm shadowing the Master Window.</p>
        <p>I'm a copy cat</p>
        <p>I do as I'm told.</p>
    </div>

</div>

#notebox is the master and #shadow is the slave that mimics the behavior in the master.

Here’s the page code to hook up the monitoring:

var el = $("#notebox");

el.draggable().closable();

// Update a CSS Class on a 3 sec timer
var state = false;
setInterval(function () {
    $("#notebox")
        .removeClass("class_true")
        .removeClass("class_false")
        .addClass("class_" + state);
    state = !state;
}, 3000);

// *** Now hook up CSS and Class watch operation
el.watch({
    properties: "top,left,opacity,display,attr_class",
    callback: watchShadow
});

// this is the handler function that responds
// to the events. Passes in:
// data.props[], data.vals[] and an index for active item
function watchShadow(data, i) {
    // you can capture which attribute has changed
    var propChanged = data.props[i];
    var valChanged = data.vals[i];

    showStatus(" Changed Property: " + propChanged +
               " - New Value: " + valChanged);

    // element affected is 'this' #notebox in this case
    var el = $(this);
    var sh = $("#shadow");

    // get master current position
    var pos = el.position();
    var w = el.outerWidth();
    var h = el.outerHeight();

    // and update shadow accordingly
    sh.css({
        width: w,
        height: h,
        left: pos.left + w + 4,
        top: pos.top,
        display: el.css("display"),
        opacity: el.css("opacity")
    });

    // Class attribute is more tricky since there are
    // multiple classes on the parent - we have to explicitly
    // check for class existance and assign
    sh.removeClass("class_true")
      .removeClass("class_false");

    if (el.hasClass("class_true"))
        sh.addClass("class_true");
}

The code starts out making the #notebox draggable and closable using some helper routines in ww.jquery.js. This lets us test changing position and closing the #notebox so we can trigger change events. The code also sets up a recurring 3 second switch of a CSS class in the setInterval() code.

Then the actual $().watch() call is made to start observing various properties:

el.watch({
    properties: "top,left,opacity,display,attr_class",
    callback: watchShadow
});

This sets up monitoring for four CSS styles and one attribute. Top and left are for location tracking, opacity handles the fading, and display the visibility. attr_class (notice the attr_ prefix for an attribute) is used to be notified when the CSS class is changed every 3 seconds. We also provide a function delegate that is called when any of these properties change – specifically the watchShadow function in the example.

watchShadow accepts two parameters – data and an index. data contains the props[] and vals[] arrays, and the index points at the item that caused this change notification to trigger. Notice that I assign the propChanged and valChanged variables, but they are actually not used, which is quite common. Rather, I treat the code here as a mere notification and then update the #shadow object based on the current state of #notebox.

When you run the sample, you’ll find that the #shadow box moves with #notebox as it is dragged, fades and hides when #notebox fades, and adjusts its CSS class when the class changes in #notebox every 3 seconds. If you follow the code in watchShadow you can see how I simply recalculate the location and update the CSS class according to the state of the parent.

Note, you aren’t limited to simple operations like shadowing. You can pretty much do anything you like in this code block, such as detecting a change and updating a total somewhere completely different in the page.

The actual jquery-watch Plugin

Here’s the full source for the plug-in so you can skim and get an idea how it works (you can also look at the latest version on Github):

/// <reference path="jquery.js" />
/*
jquery-watcher 
Version 1.11 - 10/27/2014
(c) 2014 Rick Strahl, West Wind Technologies 
www.west-wind.com

Licensed under MIT License
http://en.wikipedia.org/wiki/MIT_License
*/
(function ($, undefined) {
    $.fn.watch = function (options) {
        /// <summary>
        /// Allows you to monitor changes in a specific
        /// CSS property of an element by polling the value.
        /// when the value changes a function is called.
        /// The function called is called in the context
        /// of the selected element (ie. this)
        ///
        /// Uses the MutationObserver API of the DOM and
        /// falls back to setInterval to poll for changes
        /// for non-compliant browsers (pre IE 11)
        /// </summary>            
        /// <param name="options" type="Object">
        /// Option to set - see comments in code below.
        /// </param>        
        /// <returns type="jQuery" /> 

        var opt = $.extend({
            // CSS styles or Attributes to monitor as comma delimited list
            // For attributes use a attr_ prefix
            // Example: "top,left,opacity,attr_class"
            properties: null,

            // interval for 'manual polling' (IE 10 and older)            
            interval: 100,

            // a unique id for this watcher instance
            id: "_watcher",

            // flag to determine whether child elements are watched            
            watchChildren: false,

            // Callback function if not passed in callback parameter   
            callback: null
        }, options);

        return this.each(function () {
            var el = this;
            var el$ = $(this);
            var fnc = function (mRec, mObs) {
                __watcher.call(el, opt.id, mRec, mObs);
            };

            var data = {
                id: opt.id,
                props: opt.properties.split(','),
                vals: [opt.properties.split(',').length],
                func: opt.callback, // user function
                fnc: fnc, // __watcher internal
                origProps: opt.properties,
                interval: opt.interval,
                intervalId: null
            };
            // store initial props and values
            $.each(data.props, function(i) {
                if (data.props[i].startsWith('attr_'))
                    data.vals[i] = el$.attr(data.props[i].replace('attr_',''));
                else
                    data.vals[i] = el$.css(data.props[i]);
            });

            el$.data(opt.id, data);

            hookChange(el$, opt.id, data);
        });

        function hookChange(element$, id, data) {
            element$.each(function () {
                var el$ = $(this);

                if (window.MutationObserver) {
                    var observer = el$.data('__watcherObserver');
                    if (observer == null) {
                        observer = new MutationObserver(data.fnc);
                        el$.data('__watcherObserver', observer);
                    }
                    observer.observe(this, {
                        attributes: true,
                        subtree: opt.watchChildren,
                        childList: opt.watchChildren,
                        characterData: true
                    });
                } else
                    data.intervalId = setInterval(data.fnc, data.interval);
            });
        }

        function __watcher(id,mRec,mObs) {
            var el$ = $(this);
            var w = el$.data(id);
            if (!w) return;
            var el = this;

            if (!w.func)
                return;

            var changed = false;
            var i = 0;
            for (i; i < w.props.length; i++) {
                var key = w.props[i];

                var newVal = "";
                if (key.startsWith('attr_'))
                    newVal = el$.attr(key.replace('attr_', ''));
                else
                    newVal = el$.css(key);

                if (newVal == undefined)
                    continue;

                if (w.vals[i] != newVal) {
                    w.vals[i] = newVal;
                    changed = true;
                    break;
                }
            }
            if (changed) {
                // unbind to avoid recursive events
                el$.unwatch(id);

                // call the user handler
                w.func.call(el, w, i, mRec, mObs);

                // rebind the events
                hookChange(el$, id, w);
            }
        }
    }
    $.fn.unwatch = function (id) {
        this.each(function () {
            var el = $(this);
            var data = el.data(id);
            try {
                if (window.MutationObserver) {
                    var observer = el.data("__watcherObserver");
                    if (observer) {
                        observer.disconnect();
                        el.removeData("__watcherObserver");
                    }
                } else
                    clearInterval(data.intervalId);
            }
            // ignore if element was already unbound
            catch (e) {
            }
        });
        return this;
    }
    String.prototype.startsWith = function (sub) {
        if (sub === null || sub === undefined) return false;        
        return sub == this.substr(0, sub.length);
    }
})(jQuery, undefined);

There are a few interesting things to discuss about this code. First off, as mentioned at the outset the key feature here is the use of the MutationObserver API which makes the fast and efficient monitoring of DOM elements possible. The hookChange() function is responsible for hooking up the observer and storing a copy of it on the actual DOM element so we can reference it later to remove the observer in the .unwatch() function.

For older browsers there’s the fallback to the nasty setInterval() code, which simply fires a check at a specified interval. As you might expect this is not very efficient, as the properties constantly have to be checked whether there are changes or not. Without a notification mechanism an interval is all we can do here. Luckily it looks like this is now limited to IE 10 and earlier, which is not quite optimal but at least functional on those browsers. IE 8 would still work with onpropertychange but I decided not to care about IE 8 any longer. IE 9 and 10 don’t have the onpropertychange event any longer, so setInterval() is the only way to go there, unfortunately.

Another thing I want to point out is the __watcher() function, which is the internal callback that gets called when a change occurs. It fires on all mutation event notifications and then figures out whether something we are monitoring has changed. If it has, it forwards the call to your handler function.

Notice that there’s code like this:

if (changed) {
  // unbind to avoid event recursion
  el$.unwatch(id);

  // call the user handler
  w.func.call(el, w, i);

   // rebind the events
   hookChange(el$, id, w);
}

This might seem a bit strange – why am I unhooking the handler before making the callback call? This code removes the MutationObserver or setInterval() for the duration of the callback to your event handler.

The reason for this is that if you make changes inside of the callback that affect the monitored element, new events are fired, which in turn fire events again on the next iteration and so on. That’s a quick way to an endless loop that will completely lock up your browser instance (try it – remove the unwatch/hookChange calls and click the hide/show buttons that fade out – BOOM!).  By unwatching and rehooking the observer this problem can be mostly avoided.

Because of this unwatch behavior, if you do need to trigger other update events through your watcher, you can use setTimeout() to delay those change operations until after the callback has completed. Think long and hard about this though, as it’s very easy to get this wrong and end up with browser deadlock. This makes sense only if you act on specific property changes and set other properties, rather than using a global update routine as my sample code above does.

Watch on

I’m glad I found the time to fix this plugin and in the process make it work much better than before. Using the MutationObserver provides a much smoother experience than the previous implementations – presumably this API has been optimized better than DOMAttrModified and onpropertychange were, and more importantly you can control what you want to listen for with the ability to only listen for changes on the actual element.

This is not the kind of component you need very frequently, but if you do – it’s very useful to have. I hope some of you will find this as useful as I have in the past…

Resources

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in JavaScript  jQuery  HTML5  

by Rick Strahl at October 20, 2014 10:42 AM

Alex Feldstein

October 19, 2014

Alex Feldstein

October 18, 2014

Alex Feldstein

October 17, 2014

Alex Feldstein

The Unbelievers

This is a wonderful movie where the camera follows Richard Dawkins and Lawrence Krauss through a series of conferences at universities all over the world.

In this video, Lawrence Krauss and the producers of "The Unbelievers" talk about the film, the story of science, and the necessary discussions about skepticism in regard to religious claims.

The movie is at Netflix.

image

by Alex Feldstein (noreply@blogger.com) at October 17, 2014 01:01 AM

Morgan Housel: Some Things to Remember About Market Plunges

Morgan Housel posted a great article, as it is usual for him.

“The funniest thing about markets is that all past crashes are viewed as an opportunity, but all current and future crashes are viewed as a risk.”

This should be something you should always remember.

See the whole article at The Motley Fool: http://www.fool.com/investing/general/2014/10/16/some-things-to-remember-about-market-plunges.aspx

by Alex Feldstein (noreply@blogger.com) at October 17, 2014 12:52 AM

October 16, 2014

Alex Feldstein

Rick Strahl's FoxPro and Web Connection Web Log

SSL Vulnerabilities and West Wind Products

Several people have sent me frantic email messages today in light of the latest POODLE SSL vulnerability and whether it affects West Wind products.

The vulnerability essentially deals with older SSL protocol bugs, and the concern, especially on the server, is that protocol fallback from TLS to SSL can cause problems. Current protocols are TLS vs. the older SSL technology, and only the older SSL stacks are vulnerable. This is most critical on Web Servers, and you'll want to disable the older SSL protocols on IIS and leave only TLS enabled. TLS is the newer standard that superseded SSL; although we still talk about SSL certificates, most modern certificates are actually TLS certificates that support SSL only for fallback validation.

You can check your server’s SSL certificate status here:
https://www.ssllabs.com/ssltest/index.html

You’ll want to see that all versions of SSL are either disabled or at the very least that the server doesn’t support protocol fallback to SSL protocols.

When I ran this for my site running Server 2008 (IIS 7.0) I found SSL3 enabled, but the server did not support automatic protocol fallback, which avoids the main POODLE issue. I’ve since disabled SSL V3 on the server. The good news is that, while the certificate isn’t using maximum strength, the server was not found to be vulnerable to this issue even with the default settings in place.

Disabling SSL on IIS

If you see that SSL versions are enabled even if you have TLS certificates (which just about all of them should be by now) you can disable specific protocols. This is done through registry keys, but be aware that this is global for the entire machine. You can disable/enable protocols for both the client and server.

Here's a KB article from Microsoft to check out that tells you how to disable older SSL protocols.
http://support.microsoft.com/kb/187498

SSL and WinInet

The West Wind Client Tools and specifically the wwHTTP class rely on the Windows WinInet library to provide HTTP services. The POODLE issue is much less critical on the client side, as the client is the entity initiating the request. But I double-checked here on my local machine and I can see that WinInet uses TLS 1.2 on my Windows 8.1 machine when connecting to an SSL/TLS secured site.

Capturing an https:// session from Fiddler shows the following request header signature on the CONNECT call:

Version: 3.3 (TLS/1.2)
Random: 54 3F 15 D6 B5 3E 6B F5 AD 71 41 FB 4C 39 B9 30 C5 21 04 A4 76 7F 87 A5 1A BA D6 83 19 B3 10 3B
SessionID: empty

which suggests TLS/1.2 as the initial request sent.

I don’t know what older versions of Windows – XP in particular – use, but I suspect even XP will be using TLS/1.0 at the very least at this point. Maybe somebody can check (use Fiddler, disable the Decrypt HTTPS connections option, then capture HTTPS requests and look at the raw request/response headers).

Nothing to see here

From the West Wind perspective this issue is not specific to the tools, but to the operating system. So make sure the latest patches are installed and, if you have to, disable the older SSL protocols on the server. Client software is not really at risk, since the attack vector is a receiving Web Server. Regardless, even the client tools appear to be using the latest, safer protocols, so the West Wind tools are good to go.

by Rick Strahl at October 16, 2014 01:32 AM

October 15, 2014

Alex Feldstein

Article: How to ruin your life (most of us do)

One of my all-time favorite writers, Morgan Housel, gives excellent advice on how to ruin your financial life. Morgan is a staff writer for The Motley Fool. Until I became an avid Fool years ago, I was guilty of many of these behaviors, as most of us are. Pay attention!
Article: How to ruin your life

by Alex Feldstein (noreply@blogger.com) at October 15, 2014 12:10 PM

October 14, 2014

Beth Massi - Sharing the goodness

Getting Started with the Office 365 APIs

This weekend I had the pleasure of speaking on a couple of Office Development topics at Silicon Valley Code Camp, as well as the East Bay.NET user group meeting on Thursday (with special Halloween guest). It was great to pack three talks into one week as I’ve been doing so much internal-facing work lately, that I have been really itching to get back out to speak in front of the developer community.

One of the areas I’ve been working in for a while is building SharePoint Apps. Office and SharePoint Apps let you extend the Office and SharePoint experiences with your own customizations. Apps are web-based, and you use HTML and JavaScript to customize Office (Outlook, Word, Excel, PowerPoint) and SharePoint itself.

image

For more info on apps, see the MSDN Library: Apps for Office and SharePoint

We’ve also been working on another programming model that I’m really jazzed about. It allows you to build your own custom apps and consume data from Office 365 (Sites, Mail, Calendar, Files, Users). They are simple REST OData APIs for accessing SharePoint, Exchange and Azure Active Directory from a variety of platforms and devices. You can also use these APIs to enhance custom business apps that you may already be using in your organization.

image

To make it even easier, we’ve built client libraries for .NET, Cordova and Android. The .NET libraries are portable so you can use them in WinForms, WPF, ASP.NET, Windows Store, Windows Phone 8.1, and Xamarin Android/iOS. There are also JavaScript libraries for Cordova and an Android (Java) SDK available.

image

If you have Visual Studio this gets even easier: install the Office 365 API Tools for Visual Studio extension. The tool streamlines the app registration and permissions setup in Azure, and adds the relevant client libraries to your solution via NuGet for you.

Before you begin, you need to set up your development environment.

image

Note that the tools and APIs are currently in preview but they are in great shape to get started exploring the possibilities. Read about the client libraries here and the Office 365 APIs in the MSDN Library. More documentation is on the way!

Let’s see how it works. Once you install the tool, right-click on your project in the Solution Explorer and select Add – Connected Service...

image

This will launch the Services Manager where you log into your Office 365 developer site and select the permissions you require for each of the services you want to use.

image

Once you click OK, the client libraries are added to your project, along with sample code files to get you started. The client libraries help you perform the auth handshake and provide strongly typed objects so you can work with the services more easily.

The important bits..

const string MyFilesCapability = "MyFiles";
static DiscoveryContext _discoveryContext;

public static async Task<IEnumerable<IFileSystemItem>> GetMyFiles()
{
    var client = await EnsureClientCreated();

    // Obtain files in folder "Shared with Everyone"
    var filesResults = await client.Files["Shared with Everyone"].
        ToFolder().Children.ExecuteAsync();
            
    var files = filesResults.CurrentPage.OrderBy(e => e.Name);

    return files;
}
    
public static async Task<SharePointClient> EnsureClientCreated()
{
    if (_discoveryContext == null)
    {
        _discoveryContext = await DiscoveryContext.CreateAsync();
    }

    var dcr = await _discoveryContext.DiscoverCapabilityAsync(MyFilesCapability);
            
    var ServiceResourceId = dcr.ServiceResourceId;
    var ServiceEndpointUri = dcr.ServiceEndpointUri;

    // Create the MyFiles client proxy:
    return new SharePointClient(ServiceEndpointUri, async () =>
    {
        return (await _discoveryContext.AuthenticationContext.
            AcquireTokenSilentAsync(ServiceResourceId, 
            _discoveryContext.AppIdentity.ClientId,
            new Microsoft.IdentityModel.Clients.ActiveDirectory
                .UserIdentifier(dcr.UserId, 
                Microsoft.IdentityModel.Clients.ActiveDirectory
                .UserIdentifierType.UniqueId))).AccessToken;
    });
}

This code is using the Discovery Service to retrieve the REST endpoints (DiscoverCapabilityAsync). When we create the client proxy, the user is presented with a login to Office 365 and is then asked to grant permission to our app. Once they authorize, we can access their Office 365 data.

If we look at the request, this call:

    var filesResults = await client.Files["Shared with Everyone"].
        ToFolder().Children.ExecuteAsync();

translates to (in my case):

GET /personal/beth_bethmassi_onmicrosoft_com/_api/Files('Shared%20with%20Everyone')/Children

The response will be a feed of all the file (and any sub-folder) information stored in the requested folder.
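Because these are plain REST OData endpoints, you don't strictly need the client libraries to call them. Here is a purely hypothetical sketch from JavaScript; serviceEndpointUri and accessToken stand in for the values the discovery service and the auth handshake return, and are not part of the sample above:

// Hypothetical sketch: call the Files endpoint directly with a previously acquired OAuth token
var xhr = new XMLHttpRequest();
xhr.open("GET", serviceEndpointUri + "/Files('Shared with Everyone')/Children");
xhr.setRequestHeader("Authorization", "Bearer " + accessToken);   // token from the auth handshake
xhr.setRequestHeader("Accept", "application/json");
xhr.onload = function () {
    // the response body is the OData feed describing each file and sub-folder
    console.log(JSON.parse(xhr.responseText));
};
xhr.send();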

Play around and discover the capabilities. There’s a lot you can do. I encourage you to take a look at the samples available on GitHub:

Also check out these video interviews I did this summer to learn more:

Enjoy!

by Beth Massi - Microsoft at October 14, 2014 04:28 PM

Alex Feldstein

October 13, 2014

FoxCentral News

No change - Last chance - Chicago FUDG - TUESday the 14th

It is the last day to register for A SPECIAL EVENT: Christof Wollenhaupt will be presenting live and in person to the Chicago FUDG on Tuesday the 14th of Oct., previewing his SWFox presentation: "Lessons Learned from SQL Server". Please RSVP at the Eventbrite link on our web page. This is a must-see presentation!!

by Chicago FoxPro Users and Developers Group at October 13, 2014 11:31 AM

Last chance - Chicago FUDG - TUESday the 14th

It is the last day to register for A SPECIAL EVENT: Christof Wollenhaupt will be presenting live and in person to the Chicago FUDG on Tuesday the 14th of Oct., previewing his SWFox presentation: "Lessons Learned from SQL Server". Please RSVP at the Eventbrite link on our web page. This is a must-see presentation!!

by Chicago FoxPro Users and Developers Group at October 13, 2014 11:27 AM

Alex Feldstein

October 12, 2014

The Problem Solver

Using browserify to manage JavaScript dependencies

Managing JavaScript dependencies in the browser is hard. Library scripts typically create global variables and functions. Other scripts then depend on those global objects to do their work. This works, but in order to load all required scripts we have to add <script> elements to our HTML, make sure they are in the right order, and basically know what each script exposes.

The problem

Consider the following client side code:

// Print a message
utils.print("Hello");

 

This depends on another piece of script below:

// Expose the utility object with its print function
var utils = {
    print: function(msg){
        console.log(msg);
    }
};

 

And for all of that to work we have to load the scripts in the right order using some HTML as below:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Browserify demo</title>
</head>
<body>

<script src="utils.js"></script>
<script src="demo.js"></script>

</body>
</html>

 

Not really rocket science here, but if we want to update utils.print() to call a printIt() function loaded from yet another library, we have to go back to our HTML and make sure we load printIt.js as well. Easy in a small app, but this can become hard and error prone in larger applications.

 

Browserify to the rescue

Using browserify will make managing these dependencies a lot easier. To understand how it works we first must take a quick look at how NodeJS modules work.

With node, each module can take a dependency on another module by requiring it with the require() function. And each module can define what it exports to other modules using module.exports. The NodeJS runtime takes care of loading the files, and adding a dependency inside a module doesn't require a change anywhere else in the program.

This system works really nicely, but unfortunately the browser doesn’t provide this NodeJS runtime capability. One problem here is that a call to require() is a synchronous call that returns the loaded module, while the browser does all of its IO asynchronously. In the browser you can use something like RequireJS to load scripts asynchronously, but while this works fine it is not very efficient due to its asynchronous nature. As a result people usually use RequireJS during development and then create a bundle with all the code for production.
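For comparison, this is roughly what the same example looks like in the asynchronous AMD style that RequireJS uses. This is only a hypothetical sketch for contrast, not code from this post:

// utils.js written as an AMD module: define() registers the module asynchronously
define(function () {
    return {
        print: function (msg) {
            console.log(msg);
        }
    };
});

// demo.js: require() fetches utils.js asynchronously and runs the callback once it has loaded
require(["utils"], function (utils) {
    utils.print("Hello");
});

Because the module is handed to a callback rather than returned, every consumer has to be written around the asynchronous load.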

Browserify on the other hand allows us to use the synchronous NodeJS approach to script loading in the browser. This is done by packaging up all the required files, based on the require() calls, into one file to load at runtime. Converting the example above to use this style requires some small changes in the code.

The demo.js specifies it requires utils.js. The syntax “./utils” means that we should load the file from the same folder.

var utils = require("./utils");
// Print a message
utils.print("Hello");

 

Next the utils.js specifies what it exports:

// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;

 

Next we need to run browserify to bundle the files for use in the browser. As browserify is a node application, we need to install node and then, through the node package manager NPM, install browserify with:

npm install -g browserify

 

With browserify installed we can bundle the files into one using:

browserify demo.js > bundle.js

This will create a bundle.js with the following content:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":2}],2:[function(require,module,exports){
// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;
},{}]},{},[1]);

 

Not the most readable, but then that was not what it was designed for. Instead we can see that all the code we need is included. Now, by just including this generated file, we are ready to start our browser application.

Adding the printIt() function

Doing the same change as above is simple and, best of all, doesn’t require any change to the HTML to load different files. Just update utils.js to require() printIt.js and explicitly export the function in printIt.js, rerun browserify and you are all set.

function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

 

Note that it’s fine to just export a single function here.

 

// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;

And the result of running browserify is:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":3}],2:[function(require,module,exports){
function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

},{}],3:[function(require,module,exports){
// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;
},{"./printIt":2}]},{},[1]);


Again not the most readable code but the printIt() function is now included. Nice and no changes required to the HTML :-)


Proper scoping


As a side benefit, browserify also wraps each of our JavaScript files in a function, ensuring that variables get proper scope and we don’t accidentally leak variables into the global scope.
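You can see this in the generated bundle above: every module body ends up inside a function(require, module, exports) wrapper. As a simplified sketch (not the exact code browserify emits), utils.js is effectively turned into something like this, so the utils variable stays local instead of becoming a global:

var wrappedUtils = function (require, module, exports) {
    // the original utils.js source runs inside this function scope
    var utils = {
        print: function (msg) {
            console.log(msg);
        }
    };
    module.exports = utils;
};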


 


Using browserify works really nicely, but this way we do have to run it again after every change. In the next blog post I will show how to use Gulp or Grunt to automate this, making the workflow a lot smoother.
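Until then, one small stop-gap is to drive browserify from a Node script instead of retyping the command. This is just a minimal sketch using browserify's Node API and assumes browserify has also been installed locally (npm install browserify):

// build.js - bundle demo.js into bundle.js using browserify's Node API
var fs = require("fs");
var browserify = require("browserify");

browserify("./demo.js")
    .bundle()                                   // returns a readable stream with the bundled code
    .pipe(fs.createWriteStream("./bundle.js")); // write the stream out as bundle.js

Running node build.js then produces the same bundle.js as the command line call above.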


 


Enjoy!

by Maurice de Beijer at October 12, 2014 07:30 PM

Alex Feldstein

October 11, 2014

FoxProWiki

BlogWatch

Links to blogs that have something to do with VFP, or by VFP folks

October 11, 2014 12:02 PM

Alex Feldstein

October 10, 2014

VisualFoxProWiki

RickSchummer

Rick Schummer is the president and lead geek at his company White Light Computing, which is headquartered in southeast Michigan, USA. He prides himself in guiding his customers' Information Technology investment toward success. He enjoys working with top-notch developers; has a passion for developing software using best practices, and for surpassing customer expectations, not just meeting them. After hours he writes developer tools that improve productivity and occasionally pens articles for Fox RockX, Fox Talk, Advisor Guide to Microsoft Visual FoxPro (formerly FoxPro Advisor), CoDe Magazine, and several user group newsletters.

Rick owns another company called Geek Gatherings, LLC, focusing on training developers through conferences and formal training sessions. The initial offering of Geek Gatherings, LLC is the Southwest Fox Conference. In 2012 they added the Southwest Xbase++ Conference for developers who use Alaska Software's Xbase++. These conferences are currently held in Gilbert, Arizona at the SanTan Elegante Conference & Reception Center/Legado Hotel. Advanced training sessions remain in the planning stages. You can register through http://geekgatherings.com.

Previous to these endeavors Rick was a Partner at Geeks and Gurus, Inc. for three years, and served as Director of Development and as a Senior Application Developer for Kirtland Associates, Inc. for two and a half years. He not only wrote code, researched and evaluated technology, and managed customer projects for this organization, but also participated in the education of new and experienced Visual FoxPro developers. Prior to Kirtland, Rick worked for Electronic Data Systems (EDS) for nine years as an Advanced Systems Engineer. There Rick wrote a number of applications for General Motors (Fortune 7, although Fortune 1 when he was there {g}) and GMAC Financial Services. His applications still run in over 300 locations across North and South America. Each of these experiences taught Rick what to do and not to do in his own company.

Rick was awarded the FoxPro Lifetime Achievement Award at the Southwest Fox 2010 Conference. This award starts with nominations from the Fox Community and is determined by a committee of past recipients, Y. Alan Griver from Microsoft, and a representative from the Fox Community. Rick was literally speechless when his name was announced by Doug Hennig, and is very honored to be recognized by his peers in a community he loves being part of.

He is the architect and lead developer of the WLC Hack CX Professional tool available from White Light Computing and is not-so-hard at work on a Menu Designer replacement, expected to ship before the end of the century. You can find a smattering of free tools on his Web site as well.

In February 2002 Microsoft awarded Rick their Microsoft Most Valuable Professional (MVP). He was recognized again for the year 2002-2003, 2003-2004, 2004-2005, 2005-2006, 2006-2007, 2007-2008, 2008-2009, 2009-2010, and 2010-2011.

Looking for details on installing VFP 9 SP2 side-by-side with other versions of VFP 9? Check out this white paper with all the details:

http://www.whitelightcomputing.com/resources/VFP9AllVersionsOnOneComputer.pdf
http://rickschummer.com/blog2/2008/03/vfp-9-rtmsp1sp2-one-machine/

Writing Credentials:
He has used all DOS and Windows versions of Fox since FoxBase+ and started writing with dBASE III and Buttonware's PC-File way back in the mid 1980's.

Rick is a founding member and secretary of the Detroit Area Fox User Group (DAFUG) and past president and secretary of the Sterling Heights Computer Club. He is a regular presenter at the meetings of these organizations, presents at Fox user groups and conferences across North America, has presented for Microsoft at Dev Days, at the Great Lakes Great Database Workshop 2000-2003 and 2006, Essential Fox 2002-2004, Southwest Fox 2004-2014, VFE DevCon 2002 and 2005, German DevCon 2005-2014, Advisor DevCon 2006-2007, OzFox 2007, and DevLink 2010-2011.

Blog: Shedding Some Light found at http://www.rickschummer.com/blog2
Twitter: @rschummer

raschummer@whitelightcomputing.com, rick@rickschummer.com
http://www.whitelightcomputing.com, http://www.rickschummer.com, http://swfox.net

October 10, 2014 10:32 PM

Alex Feldstein

October 09, 2014

FoxProWiki

WillsonDeVeas

Editor comments: Updated for 2014 (formerly dormant since 2011?!)

October 09, 2014 08:48 PM

Alex Feldstein

Rick Strahl's Web Log

Chrome DevTools Debugging Issues

So it looks like Chrome 39 Canary has this issue fixed. Let’s hope that when the v39 release lands it’s still working…

Since the last few Chrome releases have come out (v38 as of this writing), I’ve had some major issues with debugging not working properly. The behavior I see is pretty strange but it’s repeatable across different installations, so I thought I’d describe it here and then link it to a bug report.

What’s happening is that I have had many instances where the debugger is stopping on a breakpoint or debugger; statement, but is not actually showing the source code. I can see the debugger is stopping because the black screen pops up and I can see the play button in the debugger window.

What’s odd is that it works and the debugger stops the first time after I start the browser. If I reload the page a second or third time though, the debugger still stops, but doesn’t show the source line or even the right source file.

This is not an isolated instance either. I initially started seeing this issue with an Angular application, where the debugger would exhibit this same behavior in some situations, but not in others. Specifically it appeared the debugger worked in straight ‘page load’ type code – it stops and shows source code properly. But when setting a breakpoint inside of event code – an ng-click operation for example – the debugger again would stop, but not show the source code.

Example

So here’s a simple example from http://west-wind.com/websurge/features.aspx. I kept the script inline to keep things simple, but whether the script is embedded or external really makes no difference to the behavior I see.

The page has a small bit of copied script in it that scrolls the page when you click one of the in-page anchor links that navigate to hash tags that exist in the page. The code works now, but I initially had to make a few changes to make it work on my page from the original source. Inside the jQuery click handler I have the following code:

$("a[href*=#]:not([href=#])").on("click", function (e) {
   console.log('scrolling');
   debugger;

Now when I do this on my local machine I get the following in Chrome 38:

ChromeDebugError

In this example, because it’s all one page the page at least is loaded, but when I had problems with my Angular app, the right source file wasn’t even opened.

Now if I hit the same exact page (just uploaded) on my live site I get the proper debugger functionality – but only on the first load. Reloading the page again after a restart I see the same behavior I see on localhost.

First load looks like this (correct behavior):

ChromeOnlineWorks 

But then subsequent requests fail again…

What I’ve Tried

My initial thought was that there’s something wrong with my local Chrome installation, so I completely uninstalled Chrome and Canary, rebooted, and then reinstalled Chrome from scratch. But I got no relief from that exercise. I was hopeful that Chrome 38, which landed today (and replaced the generally messy 37 release), might help, but unfortunately the problem persists.

I also disabled all plug-ins, but the fact that my setup on a remote machine worked with all plug-ins running makes me think it’s not the plug-ins.

Still thinking it might be something machine specific, I fired up one of my dev VMs and tried checking out the code in there – and guess what: same behavior. So it doesn’t look like this is a configuration issue in Chrome, but some deeper bug in the source parsing engine.

I had also thought that with the Angular app earlier the problem might have been some issue with script parsing or map files, but even using non-minified scripts I ended up with the same issue.

I also experimented with the breakpoint options in the browser’s source tab which lets you disable breakpoints from stopping. This had no effect, since it doesn’t appear this option affects debugger statements, only actual breakpoints set in the debugger itself.

Finally I tried the nuclear option: I ran the Chrome Software Removal Tool to completely nuke and reset my settings. It removes plug-ins, clears history and cookies, resets configuration settings and otherwise completely resets Chrome. Other than plug-ins I don’t really have much in the way of customizations, so I didn’t think this would really help, and sure enough it didn’t – the errant behavior continues.

Update: Looks like Chrome 40 has fixed this behavior

I noticed that after installing a Canary update yesterday, the problem started to go away. In Canary version 40+ breakpoints are doing the right thing and showing source code!

Nasty Bug

This is an insidious bug – and it’s been plaguing me for a few weeks now. On this page it isn’t exactly a big deal, but in a recent larger AngularJS app I was working on I constantly ran into this problem, and it was bad enough that I ended up switching to Firefox for all debugging purposes. Firefox and Firebug work fine (as do the IE DevTools), but I generally prefer running in Chrome because overall the tools are just a little easier to work with in my daily workflow, so I’d like to get to the bottom of this issue.

So my question is – has anybody else run into this weird problem where some pages are not debugging? Any ideas on what else to try? I did submit an issue to Google – let's see if anything comes of that.

© Rick Strahl, West Wind Technologies, 2005-2014

by Rick Strahl at October 09, 2014 06:02 AM

Alex Feldstein

October 08, 2014

FoxProWiki

BlogWatch

Editor comments: Blogs gone missing

October 08, 2014 03:43 PM