Planet FoxPro

July 30, 2015

Alex Feldstein

July 29, 2015

Alex Feldstein

VisualFoxProWiki

VFPMultithreading

Editor comments: updated multithreading vfp samples
VFP applications can't spawn multiple threads natively, so you need a third-party library to do it:

  • VFP2C32.FLL (VFPX: https://vfpx.codeplex.com/wikipage?title=VFP2C32&referringTitle=Home)
  • DMULT.DLL (foxpert.com: http://www.foxpert.com/download/DMULT.ZIP)
  • ParallelFox (VFPX: https://vfpx.codeplex.com/wikipage?title=ParallelFox&referringTitle=Home)

Multithreading VFP demos:

http://www.foxite.com/archives/0000419592.htm
https://kevinragsdale.net/easy-multithreading-with-visual-foxpro/
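
For a quick taste of the programming model, here is a rough ParallelFox sketch. The method names are written from memory and the worker program name is made up, so verify the exact signatures against the VFPX documentation:

* Rough ParallelFox sketch - verify signatures against the VFPX docs
Parallel = NEWOBJECT("Parallel", "ParallelFox.vcx")
Parallel.StartWorkers("MyWorker.prg")   && spin up worker processes
Parallel.Do("CrunchNumbers")            && run a procedure on a worker
* ... the main app stays responsive while the workers run ...
Parallel.Wait()                         && block until the workers finish
Parallel.StopWorkers()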

July 29, 2015 01:33 AM

July 28, 2015

Alex Feldstein

July 27, 2015

Rahul Desai's Blog

Magic Quadrant for Application Development Life Cycle Management

Gartner evaluated ADLM providers to help application development managers and other IT leaders select appropriate technology… more at the link below:

Magic Quadrant for Application Development Life Cycle Management

Figure 1. Magic Quadrant for Application Development Life Cycle Management

Source: Gartner (February 2015)

    Microsoft

    Consider Microsoft if you are heavily invested in the Microsoft development ecosystem. The vendor offers a broad suite of functionality available either on-premises or in the cloud. Growing support of open-source technologies and community participation aids in opening up the tools for a broader set of platforms. With strong support for project-level agile, Microsoft can handle almost all your needs to manage a .NET development process.

    Strengths

  • The vendor has a clear strategic direction, and offers ADLM functionality that is easy to implement.

  • The Microsoft Developer Network (MSDN) provides a significant pool of training materials and access to software.

  • Microsoft frequently rolls out additional and enhanced functionality in an agile fashion via SaaS, but uses a less frequent cadence with on-premises installations to avoid disruptions.

  • The vendor understands agile principles better than most of the integrated ADLM suite vendors.

    Cautions

  • Despite credible support for other platforms, Microsoft struggles to penetrate development organizations outside of the .NET world.

  • The shift toward mobile as a dominant platform presents an opportunity for competitors to undermine the "all-Microsoft" approach.

  • The vendor lacks a stand-alone requirements management approach. Instead, it takes an enhance-and-integrate Office approach, as well as relying on partners (e.g., eDevTech’s 4TFS product line).

  • Microsoft lacks the agile depth of pure-play vendors around the enterprise agile capabilities of project portfolio analysis and support of SAFe.

by Rahul Desai at July 27, 2015 09:10 PM

Dynamics CRM Developer Extensions

An alternative to the developer toolkit that ships with the CRM SDK; it works with VS 2012/13/15.

Dynamics CRM Developer Extensions

by Rahul Desai at July 27, 2015 08:45 PM

FoxCentral News

West Wind Web Service Proxy Generator 1.35 released

West Wind Technologies has released an update to the Web Service Proxy Generator, which automates the process of creating FoxPro clients for complex SOAP 1.x Web Services. The tool makes calling Web Services as easy as calling methods on a generated FoxPro proxy class. This release adds an updated WSDL parser that handles a few additional edge cases for parsing multi-segmented, linked WSDL files. There are also a host of updates to wwDotnetBridge to facilitate interaction with many more .NET types including long, char, byte, Single, DbNull and more. The Proxy Generator is available as shareware, and registered users can simply re-download the registered version for a free update.
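
To give a sense of that programming model, calling a generated proxy looks roughly like this; the class and method names below are invented for illustration and are not actual tool output:

* Hypothetical usage of a generated proxy class - names are illustrative
DO StockServiceProxy.prg
loService = CREATEOBJECT("StockServiceProxy")
lcQuote = loService.GetStockQuote("MSFT")   && the SOAP call happens under the hood
? lcQuote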

by West Wind Technologies at July 27, 2015 07:07 AM

Alex Feldstein

July 26, 2015

Alex Feldstein

July 25, 2015

Alex Feldstein

July 24, 2015

Articles

Installing Visual Studio Code on Linux (Ubuntu)

During this year's //build conference Microsoft officially announced a new member of the Visual Studio series called Code. As described by several people already, it is an HTML5, JavaScript/TypeScript-based text editor hosted inside the Electron shell, and it runs natively on Windows, Mac OS X and Linux. This article hopefully gives you some ideas for the installation and helps you get an improved experience out of the box compared to the standard option - at least at the time of writing this article.

Getting Visual Studio Code

I have been using Visual Studio Code since the first released version, 0.1.0, and being part of the Insider Preview program for VS Code I always manage to download the latest version using this short link:

http://aka.ms/vscode

Which is an alias for this web address: https://code.visualstudio.com/

Get the latest version of Visual Studio Code from the web site

Microsoft's Code web site detects your operating system and directly offers you the best download option based on your current browser. I'm currently running Xubuntu 15.04 x64 - Vivid Vervet - and the site offers me a direct link to get the latest 64-bit version of Visual Studio Code. In case you'd like to download a different version, please scroll down to the bottom of the site and check the additional options.

Note: Originally, I started using Code 0.1.0 on Xubuntu 14.10 and then upgraded my machine around mid-May. Also, on a different machine running Ubuntu 14.04 LTS I can confirm that Visual Studio Code works successfully.

Unzip the archive

After you have downloaded the latest ZIP archive for your architecture - here: VSCode-linux-x64.zip - you should decide where to extract the content of the compressed file. In my case, I'd like to have third-party products below the appropriate location, and therefore I usually choose /opt. You might ask yourself: why? Well, here's a decent chapter about the Linux Filesystem Hierarchy written by The Linux Documentation Project (TLDP):

1.13 /opt

This directory is reserved for all the software and add-on packages that are not part of the default installation. For example, StarOffice, Kylix, Netscape Communicator and WordPerfect packages are normally found here. To comply with the FSSTND, all third party applications should be installed in this directory. Any package to be installed here must locate its static files (i.e. extra fonts, clipart, database files) in a separate /opt/'package' or /opt/'provider' directory tree (similar to the way in which Windows will install new software to its own directory tree C:\Windows\Program Files\"Program Name"), where 'package' is a name that describes the software package and 'provider' is the provider's LANANA registered name.

Looks good to me, no?

Anyway, let's just use this as our base - given that you're root on the machine it's surely a good choice; otherwise feel free to unzip the archive in your personal user space below your home directory. Next, let's extract the content as suggested, using the console (or terminal, if you prefer that term):

$ cd /opt
/opt$ sudo unzip ~/Downloads/VSCode-linux-x64.zip

This is going to create a new directory VSCode-linux-x64 which contains the static binary to run Visual Studio Code on your system. Right now, you would be able to launch the text editor by executing the following command:

/opt$ ./VSCode-linux-x64/Code

Despite some warnings and errors on the console output, similar to those:

[3437:0724/220852:ERROR:browser_main_loop.cc(173)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell

Visual Studio Code is up and running...

Welcome screen of Visual Studio Code on first start of the text editor

Adding a little bit more comfort

Hopefully, you were able to launch Visual Studio Code based on the description given above. Now, let's add a little bit more comfort to your user experience. Unfortunately, there is no out-of-the-box installation package for the usual distributions - at least not yet - and we are obliged to do some manual steps. In the following, I'm going to give you my steps with some brief explanations of the why and how. Of course, there are always multiple choices, and you might either skip one or the other step or even have better suggestions. Please use the comment section at the bottom to give me your tips & tricks. Thanks!

Version-(in)dependent folder and symbolic link

Not sure about you, but given the manual installation steps I would like to have better control each time I install a newer version of Code. This also helps to keep adjustments that rely on constant path information - like application launchers and shortcuts to run Visual Studio Code - intact. Okay, let's dig into that and first rename (move) the base directory of Code to a version-specific one:

/opt$ sudo mv VSCode-linux-x64 VSCode-0.5.0

Again, as of writing this article 0.5.0 was the latest available version. Meanwhile, there are good chances that you might have a higher version already - good! Next, I usually create a symbolic link back to the newly renamed folder in order to stay version-independent. Sounds confusing, right? Hold on, I'll explain it shortly, and you will see the benefits, too.

/opt$ sudo ln -s VSCode-0.5.0 VSCode

Your own /opt folder might look similar to this one right now:

Extract the Visual Studio Code zip archive below /opt directory and create a version-independent symlink

As you can see in the screenshot, I've been using Code since the very beginning, and using this approach I am actually able to keep all versions installed side by side. The most interesting part is the version-independent symlink in the /opt directory. This allows me to launch Visual Studio Code by executing the following line from anywhere:

/opt/VSCode/Code

Like using the Application Finder on Xubuntu after pressing Alt+F2:

Launch Visual Studio Code from the Application Finder with fully qualified path to executable

This scenario gives us a good head start for further activities.
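
For instance, when a newer build ships, the upgrade should boil down to a few commands like these (the 0.6.0 version number is hypothetical):

/opt$ sudo unzip ~/Downloads/VSCode-linux-x64.zip
/opt$ sudo mv VSCode-linux-x64 VSCode-0.6.0
/opt$ sudo ln -sfn VSCode-0.6.0 VSCode

The -sfn switches simply replace the existing symlink in place, so every launcher and shortcut pointing at /opt/VSCode keeps working.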

The power of PATH

Now that we have a "fixed" location for Visual Studio Code, it would be more comfortable to avoid specifying the full path information each time we would like to launch the text editor. Also, looking at some of the cool command line options of Code on other platforms, it would be nice to have them on Linux as well. Okay, then let's do it using the PATH environment variable. The Linux Information Project has a good definition online:

PATH Definition

PATH is an environmental variable in Linux and other Unix-like operating systems that tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user. It increases both the convenience and the safety of such operating systems and is widely considered to be the single most important environmental variable.

That sounds exactly like what we are looking for. And in line with other operating systems, we are going to create another symlink for our purpose, like this:

~$ sudo ln -s /opt/VSCode/Code /usr/local/bin/code

Changing the letter casing of the executable from Code to code isn't actually a typo.

Commonly, UNIX and Linux commands are written in lower case anyway, so why break with this tradition? Of course, you are now able to launch the text editor with this new path, too - either on the console / terminal, like so

~$ code

or using the Application Finder - the choice is yours.

Launch Visual Studio Code from the Application Finder

Thanks to the PATH environment variable we can now completely omit the path information - Linux knows where to find our executable.
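
As an aside - purely an alternative I did not pursue - you could skip the symlink and extend the PATH variable itself, e.g. in your ~/.profile; note that the command then keeps its original casing:

~$ echo 'export PATH="$PATH:/opt/VSCode"' >> ~/.profile
~$ source ~/.profile
~$ Code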

Application launcher in Main Menu

Being able to start Visual Studio Code from anywhere on the console has already given us some comfort, but compared to Windows and Mac OS X users we are still living in the digital stone age - no application is fully installed on your Linux OS without an application launcher in the main menu. In Xubuntu you would open Application Menu (or press Alt+F1) - Settings - Main Menu in order to add a new launcher to the menu. In the menu editor, select the Development section (or any other where you would like to place the launcher) and click on New Item to define the Launcher Properties. You might like to enter the following on your machine:

Add a new item to the main menu for Visual Studio Code
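
The dialog itself isn't reproduced here; presumably the essential Launcher Properties come down to something like this, with the command matching the symlink we created earlier:

Name:     Visual Studio Code
Command:  code
Comment:  Code Editing. Redefined.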

Unfortunately, this leaves us with an empty icon for now. Quickly open a new terminal (or switch to an existing one) and let's see which graphics are provided by Microsoft, like so:

~$ find /opt/VSCode/* -type f -iname '*.png'
/opt/VSCode/resources/app/vso.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-up.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-left.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-right.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-right-dark.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-left-dark.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-down-dark.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-down.png
/opt/VSCode/resources/app/client/vs/base/ui/scrollbar/impl/arrow-up-dark.png
/opt/VSCode/resources/app/client/vs/editor/diff/diagonal-fill.png
/opt/VSCode/resources/app/client/vs/editor/css/arrow-left.png
/opt/VSCode/resources/app/client/vs/editor/css/arrow-right.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Resources/Images.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Images/FileIdentifier.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Images/icon2.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Images/icon3.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Images/icon1.png
/opt/VSCode/resources/app/client/vs/workbench/contrib/daytona/TestPlugin/Images/console-icons.png
/opt/VSCode/resources/app/client/vs/workbench/ui/parts/editor/media/letterpress.png
/opt/VSCode/resources/app/client/vs/workbench/ui/parts/editor/media/letterpress-dark@2x.png
/opt/VSCode/resources/app/client/vs/workbench/ui/parts/editor/media/letterpress-dark.png
/opt/VSCode/resources/app/client/vs/workbench/ui/parts/editor/media/letterpress@2x.png
/opt/VSCode/resources/app/node_modules/emmet/Icon.png

Alternatively, you might also have a look at the SVG graphics provided by Visual Studio Code.

I chose vso.png, and to simplify my life with regard to future upgrades and unexpected changes, I placed a copy of the graphic file into the usual location on a Linux system:

~$ sudo cp /opt/VSCode/resources/app/vso.png /usr/share/icons/

Hint: Use the Move option in the window menu to relocate the dialog using the arrow keys, and then confirm your selection with a click on the OK button of the dialog.

Your Main Menu editor might look like this now:

Visual Studio Code as proper entry in the main menu of Xubuntu

Congratulations, your new application launcher has been added to the menu and you can either navigate into the Development section (or the one you chose) or type your choice into the application quick filter textbox to find and execute Visual Studio Code.

Navigate the application menu to launch Visual Studio Code

Use the quick filter entry of the application menu to launch Visual Studio Code

Creating a Desktop Entry file

As we are working with Linux, there are always multiple ways to achieve the same or a similar result. Perhaps you'd prefer to create and use a file-based application launcher which adds itself to the menu structure automatically. Creating a .desktop file is not too challenging and requires just a simple text editor - like Visual Studio Code ;-) - to write the following definition into it:

[Desktop Entry]
Version=1.0
Encoding=UTF-8
Name=Visual Studio Code
GenericName=Integrated Development Environment
Comment=Code Editing. Redefined. Build and debug modern web and cloud applications.
Exec=code
TryExec=code
Icon=vso
StartupNotify=true
Terminal=false
Type=Application
MimeType=text/x-csharp;application/x-mds;application/x-mdp;application/x-cmbx;application/x-prjx;application/x-csproj;application/x-vbproj;application/x-sln;application/x-aspx;text/xml;application/xhtml+xml;text/html;text/plain;
Categories=GNOME;GTK;Development;IDE;

Save it as vscode.desktop and then put this file into the appropriate location for a Linux system:

~$ sudo cp vscode.desktop /usr/share/applications/vscode.desktop

Thanks to the proper location of the shared icon and the symlinks we created earlier, we do not have to specify any absolute paths in our Desktop Entry file. As soon as the file has been copied below the shared applications folder it automatically appears in your main menu and is ready to be used.

For your extra comfort you might like to download the vscode.desktop file. You will have to rename the file and place it accordingly on your system.

Make it a launcher in Cairo Dock

As for the different flavours of Ubuntu, I have to admit that I'm a long-time user of the Xfce environment, i.e. Xubuntu, and on top I also like using a flexible dock panel (or two, or three). Cairo Dock is a fantastic package if you would like to have a little bit of Mac OS X flavour on your Linux desktop, and adding a launcher for Visual Studio Code is very simple to do.

Add Visual Studio Code to a dock panel like Cairo Dock or similar

First, run Visual Studio Code using one of the previously described methods. After the application runs and an icon for Code appears in the dock panel, right-click the icon, select the sub-menu entry "Make it a launcher" from the "code" context menu entry, and you're done. That's actually similar to pinning an application to the taskbar in Windows 7, Windows 8 or Windows 10. Close the text editor and your new launcher will still remain in the dock panel.

Summary of installing Visual Studio Code

Without any question it is fantastic to have an identical text editor for all three major operating systems. But Linux users are currently confronted with some lack of comfort compared to their Windows and Mac OS X friends. Although there are several, and in my opinion easy, ways to improve the user experience of Visual Studio Code under Linux, I'm a bit concerned about whether Microsoft will keep it on par with the other systems. Right now, installation takes some manual steps, there are essential parts missing to provide an excellent first contact, and other editor features like automatic updates aren't yet available for the Linux variant compared to Windows and Mac OS X.

Bearing in mind that the product was launched only back in April/May of this year and we are currently on version 0.5.0, I am very interested in the future development. The online documentation has some neat features for you, and the team at Microsoft has an open ear for the feedback and wishes given on their UserVoice website, too.

That's all for the installation part of Visual Studio Code. Please leave your comments as well as tips & tricks for me.

Happy coding!

by Jochen Kirstaetter (jochen@kirstaetter.name) at July 24, 2015 05:42 PM

Alex Feldstein

July 23, 2015

Alex Feldstein

July 22, 2015

CULLY Technologies, LLC

php5-fpm trouble with nginx using 100% CPU

I have a client with a website that I've moved over to DigitalOcean. Today, it started having trouble: the CPU started getting pegged with php5-fpm processes. I'm still trying to figure out what is going on.

  • The error log is located in /var/log/nginx/error.log
  • The configuration file is located at /etc/php5/fpm/pool.d/www.conf

I can reset the process with sudo service php5-fpm restart, but before too long the php-fpm: pool www processes end up taking 100% of the CPU.

There were some error messages in the error.log file that indicated that memory may be an issue. I found an article recommending that I look in the php.ini file (locate php.ini from the command prompt). There were several files, and the /etc/php5/cli/php.ini file had the setting memory_limit = -1, which basically means "unlimited". I set that to memory_limit = 128M and restarted. We'll see if that resolves the issue.
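
If memory alone doesn't explain it, another guess worth checking is whether the worker pool in that same www.conf is capped, so the pool can't consume the whole droplet. The directives are standard php-fpm settings; the values below are illustrative, not my actual configuration:

; /etc/php5/fpm/pool.d/www.conf - illustrative values, tune for your RAM
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4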

Well, these are my notes and hopefully I’ll stumble upon a solution.

by kcully at July 22, 2015 07:25 PM

Alex Feldstein

July 21, 2015

Rahul Desai's Blog

Alex Feldstein

Beth Massi - Sharing the goodness

Visual Studio 2015 and .NET Framework 4.6 released today--behind the scenes goodies, t-shirts, and lots of open development

You probably heard that Visual Studio 2015 and .NET Framework 4.6 released today. Congrats, team! This is a big day for us, as we've all been working really hard on this release for many, many months. Get the bits now!

I had the pleasure of showing off the awesome productivity, debugging and diagnostic tools kicking off the launch keynote with Soma. That was pretty darn fun. I personally thought I was wearing too much makeup but that’s what studio people make you do ;-). Amanda Silver and Scott Hanselman also joined Soma on stage to show off a slew of mobile and web development tools. Scott also touched on new features in C#, ASP.NET 5 and .NET Core which is our open source & cross-platform stack. If you missed it, definitely check it out.

Keynote: Visual Studio 2015 - Any app, Any developer
You can also learn more by watching deeper videos that drill into new features in Visual Studio 2015: Visual Studio 2015 Final Release Event

By the way, I mentioned at the end of my demo that Roslyn (the .NET Compiler for VB & C#) is on GitHub. There’s actually a lot of activity on GitHub. Check out our repos and contribute!

I have to say that this was one of my favorite Visual Studio launches ever – and I've been using Visual Studio for a very long time. What was really special about it, IMO, was that we really showed our customers how we've embraced open source and open development. It's been a big culture change for us and we're excited for the future. It's a great time to be a developer! Watch this short video on how we build Visual Studio, many parts out in the open. I am so proud to be on this team.

YouTube: Building Visual Studio 2015

And thanks for all the encouraging comments on twitter! Sometimes, while I’m battling my nerves and trying to remember all the right keystrokes on stage, I forget how I may be inspiring other developers, especially young women developers, to do their best work. It’s humbling to say the least.

Oh, and all of you asking for my .NET T-shirt: you can grab the artwork from the .NET Foundation SWAG repo as well as order stickers from Sticker Mule. And if you have more artistic ability than your average developer like me, submit a pull request and we'll get you onto your own stickers too! (T-shirts coming soon. UPDATE 7/22: Order T-shirts here! http://dotnet.spreadshirt.com/)


What I personally found most inspiring was the app building our teams did for Humanitarian Toolbox (www.htbox.org), a charity supporting disaster relief organizations with open source software and services. I first learned about HTBox from Bill Wagner, who's also on the advisory council for the .NET Foundation. It was amazing to see many teams at Microsoft come together and use our tools and technologies for such an important cause. Read more about the project called "allReady" and get involved.

It’s a big day for us on the Visual Studio team. Thank you all for your support and for being awesome developers!

Enjoy!

by Beth Massi - Microsoft at July 21, 2015 02:50 AM

July 20, 2015

Rahul Desai's Blog

Cloud Certification now available for Microsoft Dynamics CRM

Long awaited capability now available…

Businesses are moving to the cloud at an increasingly rapid pace. We've seen it first-hand with the popularity and market momentum of Dynamics CRM Online. Last quarter we reported that CRM Online revenue nearly doubled… more at the link below:

Cloud Certification now available for Microsoft Dynamics CRM – Microsoft Dynamics Blog

by Rahul Desai at July 20, 2015 08:48 PM

Alex Feldstein

Rick Strahl's FoxPro and Web Connection Web Log

Clicks not working in Internet Explorer Automation from FoxPro

A few days ago, somebody posted a question on our message board mentioning that Internet Explorer Automation (using COM and InternetExplorer.Application) fails to automate click events in recent versions of Internet Explorer. A quick check with my own code confirmed that indeed clicks are not properly triggered when running code like the following:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .t.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')

DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO

loWindow = o.document.ParentWindow
? loWindow
*loWindow.execScript([alert('hello')])

oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
? oLink.href
oLink.click()  && doesn't work

o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
? oButton
oButton.Click()  && doesn't work

Note the link and button clicks – when this code is run with Internet Explorer 10 or later, the page navigates but the clicks are never registered in the control. This used to work just fine in IE 9 and older, but something has clearly changed.

IE 10 – DOM Compliance comes with Changes

Internet Explorer 10 was the first version of IE that supports the standard W3C DOM model, which is different from IE's older custom DOM implementation. If you're working with IE COM Automation you will find there are a number of small things that have changed and that can cause major issues in applications. In Html Help Builder, which extensively uses IE automation to provide HTML and Markdown editors, I ran into major issues at the time when IE was updated. There are both actual DOM changes to deal with for the W3C compliance, as well as some behavior changes in the actual COM interface for accessing the DOM from external applications.

The issue in this case is the latter. The problem is that IE is exposing DOM elements natively, which means the DOM elements are exposed as COM objects using the native JavaScript objects. Specifically, JavaScript functions always have at least one parameter - the arguments array - and that's reflected in the dynamic COM interface.

JavaScript Method Calls Require a Parameter

The workaround for this is very simple – instead of calling

.Click()

you can call

.Click(.F.)

Passing the single parameter matches the COM signature and that makes it all work. Thanks to Tore Bleken who reminded me of this issue that I’ve run into myself countless times before in a few other scenarios.

So the updated code is:

o = CREATEOBJECT('InternetExplorer.Application')
o.visible = .t.
o.Navigate('http://west-wind.com/wconnect/webcontrols/ControlBasics.wcsx')
 
DO WHILE o.ReadyState != 4
   WAIT WINDOW "" TIMEOUT .1
ENDDO
  
* Target object has no id so navigate DOM to get object reference
oLinks = o.Document.getElementsByTagName('a')
oLink = oLinks.item(0)
* oLink.Click(.F.)
 
 
o.document.getElementById('txtName').value = 'Rick'
oButton = o.document.getElementById('btnSubmit')
? oButton
oButton.Click(.F.)

The hardest part about this is remembering that sometimes this is required and other times it is not – it depends on the particular implementation of the element you're dealing with. In general, if you are dealing with an actual element of the DOM this rule applies. I've also run into this with global functions called from FoxPro.

The rule is this: whenever you call into the actual HTML DOM's native interface, you need to do this. For example, if you define public functions and call them from FoxPro (o.document.parentWindow.myfunction(.F.)) you also need to ensure at least one parameter is passed. As a side note, functions have to be all lower case in order for FoxPro to be able to call them, due to FoxPro forcing COM calls to lower case and JavaScript function names being case sensitive.
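
Putting that together, a minimal sketch (the function name myfunction is just an illustration) looks like this:

* Assumes the loaded page defines: function myfunction(msg) { alert(msg); }
loWindow = o.document.parentWindow
loWindow.myfunction(.F.)   && lower-case name, at least one parameter passed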

These are silly issues that would probably be fairly easy to fix if FoxPro were still supported. Alas, since it's done, we'll have to live with these oddball COM behaviors. Luckily there are reasonably easy ways to work around some of the issues, like the simple parameter trick above.

by Rick Strahl at July 20, 2015 08:00 AM

Alex Feldstein

July 19, 2015

TechSpoken

Dipping a toe into the rivers of Babble-On

Hello folks...

A very long time ago, I wrote something about getting the VFP concept of "multiple detail bands" in SSRS. (That's the scenario where you have two unrelated children of a parent and want to display them in the same table.)

It now occurs to me that the RDL function LookupSet was practically made to help solve this problem. (I think it came in with SSRS 2008, but it might only have appeared in R2 -- I haven't checked this.)

Suppose, for example, I want to show multiple languages along with multiple cities for a country, using my standard borrowed-from-MySQL World database. Suppose I have a denormalized dataset in my report that shows countries and their related cities. I can add another dataset for country languages, without denormalizing any further. Now, I can write an expression like this (from the context of the first dataset, which might have a name like "CountryCities"):

Using the lookup function to get info from a second dataset
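
The expression from the screenshot isn't reproduced here, but given the World schema it is presumably something along these lines (the dataset name "CountryLanguages" and the exact field names are assumptions on my part):

=Code.GetLanguages(LookupSet(Fields!Code.Value, Fields!CountryCode.Value, Fields!Language.Value, "CountryLanguages"))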

...and add a little code function like this:

Public Function GetLanguages(ByRef Langs As Object()) As String
   Dim sb As New System.Text.StringBuilder()
   For Each o As Object In Langs
      sb.Append("<li>" & o.ToString() & "</li>")
   Next
   Return "<ul>" & sb.ToString() & "</ul>"
End Function

... and that's all I need. As you can see I've gussied it up somewhat here by adding some HTML placeholder formatting (HTML placeholders were not available before 2008 for sure) so that my "details" can be put into a single textbox but are neatly formatted as a list. Still, it's precious little code.


 

What do you think?

Looking up even further

I have rarely used the trio of SSRS lookup functions (Lookup, LookupSet and MultiLookup). Most of the time, it seems to me that I should do most of the joining myself, in SQL queries, before bringing the data into the RDL for arrangement. It seems like perf would be better that way, although I have never tested this assumption and am now somewhat more motivated to do so.

Goodness me. It certainly is nice to be back in RDL-territory....

July 19, 2015 10:28 PM

Shedding Some Light

An “IT Miracle” and a lot of luck is not the best plan!

This is a true story of a day in the life of several software developers (one who proudly and regularly declares #IHATEHardware) and a hardware/networking professional, and one of our customers who will of course remain anonymous for obvious reasons. That said, I share this story of lessons learned and reinforced in hopes that this happens to no one else and that it encourages you to help others protect their data assets so they are not taken to the edge of losing their business.

My days normally start out around 8:00am because most mornings I like to sleep until something naturally wakes me up. Most days it is construction noise in the neighborhood, my wife's alarm, or the dog, but on July 16th it was a phone call from Frank Perez, who is one of my teammates at White Light Computing. It was a very early 6:15am. I was waking up out of a dream where I was in a stadium of people and there was an earthquake happening (probably something in the 5.0 range, which was kind of cool). In my dream my phone was ringing too. Surprisingly, I answered it, and it was Frank, who started talking about the details of an investigation he was conducting based on a slew of error reports overnight from one of our customers. Normally the error reports are related to the network failing, which is reported to the customer's IT Director. But the error reports started early and were "not a table" errors. Frank connected to the server where the data was located and tried to open the tables in the error reports. They failed to open. Upon further inspection he found them encrypted, and in the folder he also found two files:

1) How_to_decrypt.GIF
2) How_to_decrypt.HTML

(Note: the instructions in the two files are not the same. The HTML made me quite nervous as it could have active content. I do not advise opening up this file in the wild just to be extra safe.)

Frank suspected that someone had opened an attachment and unknowingly installed Cryptolocker or one of its variants. This is the second time in a few weeks Frank has seen this at a customer site, though at a different customer (who literally had no backups). Based on the time stamps, Frank guessed it started between 8:00 and 8:15pm the night before, so it had been running for 10 hours. My experience and the research I had done on Cryptolocker said that it isolated itself to the computer it was installed on. This was the first time I had heard of it jumping from a workstation to the server. The day was going downhill quickly.

Here is an image of the How_to_decrypt.GIF:

A king's ransom

Something you never want to see on your computer!

(I’ve blurred out a couple of things in case it will identify our customer)

This was not how I was expecting to start my Thursday. I formulated a plan to contact key people and then head into their office with Frank. I talked to the owner of the company, who I learned was out of town and a couple of time zones away. I talked with the IT Director, who was away on vacation, to get the lowdown on the backups and where they were. I knew that without the data people were going to be doing a lot of manual work, and most of the workers wouldn't even be able to do their jobs. Awesome news: a backup of the server is taken at 5:00 each day. It sounded like we might only be missing a few hours of data, and the workers working between 5:00 and 8:00 use the apps with SQL Server and not the DBFs, so things were really sounding like they might not be as bad as I originally considered.

For those who have not been introduced, Cryptolocker (aka Cryptowall, CryptoOrbit, and Cryptolocker 3.0) is ransomware, and it is not fun at all. I have seen it too many times in the past couple of years at customer sites. Although it behaves like one, this "software" is not a virus; it is a rootkit that establishes itself on the computer. It installs itself via socially engineered email attachments that can fool even the savviest of computer users who know better. The software installs via a link from the Internet. It then calls home to get a key and begins to encrypt files with predefined extensions, which started out as MS Office extensions but has been expanded (oddly, INI and XML are not on the list). Unfortunately, Visual FoxPro data files fall into the list. The process encrypts the files one folder at a time. The first variant of this software stuck to the local computer, so if someone opened the attachment and followed the link, only one computer was affected. Still, for some of our customers this can be bad enough, depending on the computer that gets hit. But the latest variant now hits mapped drives, so files on a server or another computer in a peer-to-peer network can join in on the fun. And the performance is very impressive, as it had all the files in the data folder on the server encrypted in less than 20 minutes.

I learned Thursday from someone who recently tested six of the most common anti-virus and malware programs that not a single one found it on an infected machine. The day gets worse.

There are two ways to get your files back: restore from backup, or pay the ransom and decrypt the files using the key returned from those holding them hostage. If you have good backups it might not be too bad, depending on the timing of the backups. I was thinking it would not be a problem, as there are daily backups and we had the most recent one from a few hours before the attack.

So back to the 7:00am hour: I contacted a couple of people on my team who help support this customer and the key players at the customer site, and headed into the office.

Once at the office, we met with the newest member of the team, the new hardware/networking tech for our customer. Frank explained his findings and our hypothesis. The tech had recent experience with the newest variant of Cryptolocker, confirmed Frank's conclusion, and gave us the lowdown on what had happened, how this ransomware works, and what we needed to do.

Developing the plan of attack:

  1. Disconnect each computer from the network in case of propagation. Kill the wireless so no laptops or other devices could connect to the network.
  2. Search each computer for ransom files, starting in the room that was working around 8:00 last night, to find the computer that is doing the encryption ("patient zero").
  3. Remove the computer from the room.
  4. Verify the problem really is what we hypothesized.
  5. Determine the damage on the workstation and the server.
  6. Step back and develop the recovery plan.

The approach, the collaboration, the planning, and the implementation of the plan reminded me of how firemen approach a fire. If you follow a fire truck to a fire you are likely to witness something that at first seems disturbing. The truck stops and the firemen get out. They are not running around. They are methodically executing a plan, which to the common person might seem to be working at a slower pace than is needed to get the fire out. As the fire rages in the building, the firemen get their gear, strap on air tanks, put up ladders and get on the roof, pull the hoses off the truck, attach them to the fire hydrant, and put on their air masks; some start cutting holes in the roof and others start throwing water on the fire. Often the fire is out in short order. It is because of the planning and training, and the implementation of the plan, that things work so well. This is how we worked to find the troubled computer and determine how to get the customers back to work.

Finding the machine that installed Cryptolocker turned out to be simple, as all we had to do was search for the file names above on the C: drive, and possibly other drives, on each computer. In this office there are close to 50 computers, so the task took a little time with three of us unplugging and searching. We found the troubled computer pretty quickly. Murphy's Law would have dictated it show up on the 50th computer, but instead it was one of the first.

The fact is: we considered paying the ransom to get the server back to normal. The people cost involved in rebuilding the server and restoring the files was much more than the ransom. Obviously one has to understand the ramifications of giving money to the criminals. But what if it was necessary? I've talked to several of our customers who have been hit, and several colleagues with customers who have been bitten, and sometimes the backups are not good enough and the money needs to be paid to stay in business. It is these kinds of moral dilemmas that can keep one up at night.

We started looking into it and really thought through the process, to the point of getting a spare laptop and potentially sacrificing a MiFi device to get to the hackers' Web site and instructions. We did not really know if something that connected would get infected, or what potential effects it could have on the hardware used. Even the thought of searching for and connecting to something like the FBI site in search of keys was scary to me. Who knows what fake sites could be set up. We have also read and heard that Cryptolocker can get installed just by visiting a URL, so we did not take any chances. Before we got started, we realized that the ransom note stated a 1 to 10 day turnaround on getting the data back. We were not sure if this meant 10 days to get us the key, or 10 days for the solution to decrypt all the files it encrypted. Additionally, the ransom required bitcoin as payment, and getting bitcoin currency was new to all three of us. So we left that as the last-resort option and moved forward with the better plan.

Second plan of the day:

  1. Determine the ransom and steps to pay it (last resort).
  2. Update the customer on the situation, explain the ransom and what we need to do, and get permission to pay the ransom as a last resort.
  3. Build a new virtual server to replace the virtual server with the encrypted files. We wanted to leave the old server intact in case something in it was important to the restore of the new server.
  4. Restore the backup from the previous day to the new server.
  5. Reconnect the workstations to the network, and test the systems.
  6. Get home in time for dinner (not really in the plan, but if all went well…).

Rebuilding the server was not my thing (remember #IHATEHardware), but Frank and the networking tech didn't mind and got started. The IT Director had the Windows Server ISO and keys staged for us to use. Hyper-V and the ISO made short work of getting the server operating system installed. But lo and behold, the keys did not work. It turned out the server was R2 and we had keys for something else. We looked for the proper ISO and key combinations and found a stash of DVDs with different versions. Several hours later, we downloaded the proper ISO to match the existing virtual server and got it installed. Still enough time to get the backup restored and everyone home for dinner.

The backup was restored. We poked around and saw quite a few files missing, including DBFs, CDXs, FPTs, EXEs and DLLs. Some folders had all the data in the data folder but were missing the EXEs in the application folder. Some folders had the EXEs but were missing the runtime files. There was no obvious pattern.

The network tech dug into the backup software and came upon a revelation: we had restored a differential backup. Ah, perfect - so we had more work to piece the restore back together. First we had to find the last full backup, then restore the differentials after restoring the full. More work, but an easy enough plan of action. Our customer has four solid-state drives rotated as the backups (a fifth daily is on order to replace the previous fifth one), each capable of holding 680GB. Fortunately, earlier in the day our customer's onsite developer had requested the Controller bring the offsite drives back to the office in case they were needed. Perfect, the plan was working. Then the new networking tech delivered news that was about as devastating as Frank's original find of Cryptolocker: the last 16 days of backups were ALL differentials. He could not find the last full backup.

I placed a call to the owner to explain the situation, and a second one to the IT Director, who explained where to find the full backup. Unfortunately, what he pointed us to was the differential backup we had used. You could feel the room deflate. As you can ascertain, we effectively had no backup. Holy cow. My stress level went up a notch. Earlier in the day the IT Director had told the owner there were three options:

A) Restore the backup
B) Pay the ransom
C) Pack it up and go out of business

Going back in time…

Many years ago, when we needed a test data set we would ask the previous IT Director and she would give it to us a day or two later, since she had to restore from tape. The restoration process was a pain in the neck and resource-intensive. So to help us out, I asked Frank to develop a rudimentary backup process to run nightly at midnight. This process copied key files to a folder on one of the computers other than the server. It was never intended as a full backup or part of the disaster recovery process. From time to time the old IT Director would recover files we backed up, because it was quicker than the restore from tape. We benefited from this by grabbing the backup for our test machine.

One of our contractors happened to be in the office on Tuesday and grabbed a copy of the data from Monday night's backup for some testing he needed to work on. He does this every so often when he is in the office, but he is not there every day and has been known to take long vacations. Earlier in the day I asked him to secure that backup just in case it was needed, not expecting to ever need it.

A few years ago I requested a test machine to create an isolated environment for the customer to test our application changes. The owner has so much faith in us that he prefers to test in production. We know better and never have that level of trust in ourselves. After many requests, and some serious pushback and flak from the current IT Director, we got a test machine, which is a different VM in Hyper-V. The last major testing we did was last August, but at that time we had refreshed the entire VM from production.

Back to solving the problem…we knew we had more options than the IT Director.

  • Restore the backup.
  • Rebuild the backup from Tuesday, restore the previous night, and leverage the test machine.
  • Pay the ransom.
  • Start with a baseline from last August from the test machine.
  • Absolutely no talk of going out of business, yet.

Our biggest concern was that our backup from the night before was taken four hours after the encryption process started. But one thing Cryptolocker cannot do is encrypt files that are open. It just so happens one or more people left an application or two open, with some very important files open. Mind you, corporate policy states that employees close all the apps before leaving for the day. So, because someone violated corporate policy, our backup was able to capture some really important files. Sure, these files would have been on the nightly backup from 7 hours earlier, but we had even fresher data.

We ended up implementing plan B and it worked. We restored the Tuesday backup, then the previous night's backup, then our midnight backup. Still, 77 DBFs were not restored. We used Beyond Compare to help determine the missing files (thank you, Scooter Software, for the best file/folder comparison software around). It turned out that many of the tables were static, some temporary, and some could be rebuilt or ignored completely. We used Beyond Compare to move the missing files from the test machine over to the production server. The three of us then grabbed the remaining files, like the latest EXEs and runtime files, from our machines to fill in the gaps.

Sure, it is not perfect as some of the data was from August of last year, but we know that we have all the key things covered and the core data is the latest and greatest.

I texted the owner the good news and told him I would be in the office before they opened for business on Friday. We left at 10:30pm.

Friday had a few glitches here and there (mostly because we missed some of the Visual FoxPro Reporting APP files), and a couple of machines that relied on wireless access could not be used until we had checked out all the laptops coming in from the satellite workers. The only machines affected were patient zero and the file server.

Lessons to reinforce/learn:

  1. Backup, backup, backup.
  2. Full backups are better than differentials.
  3. Differential backups rely on a full backup.
  4. Test the backups.
  5. Have multiple generations of backups.
  6. Use multiple kinds of backups (daily, weekly, monthly).
  7. Use multiple storage methods for backups (disk, mobile disks, offsite and onsite, cloud).
  8. Review the processes and the disaster recovery plan periodically.
  9. Refresh the test machine with production on a more regular basis.

It pays to be lucky

We absolutely lucked out this week. We lucked out because our contractor was in the office on Tuesday and grabbed a backup; he easily could have been on vacation like so many people this time of year. We lucked out because we solved a pain point years ago by creating this backup in the first place. We lucked out that Frank and the new network tech had some recent experience with Cryptolocker. We lucked out that the network tech is very bright and works well with the development team (IT support and developers do not always get along, in my experience). We lucked out that we have a test machine that had the rest of the files. We lucked out that one or more employees violated corporate policy and had the apps open, which normally gives you fits when trying to back up files. We lucked out that our backup process has the intelligence to back up open files. We lucked out that our customer had faith in us. We lucked out that we could deliver a working data set. Our customer lucked out that he is back in business so quickly.

I mentioned that our customer had faith in us. He told me on Friday that his IT Director did not think we would be able to fix this. His daughter, who works in IT at a local community college, did not think we would be able to pull the phoenix from the ashes. I explained to our customer that from time to time during my career we have relied on pulling off an "IT Miracle", and each of us is limited in the number of miracles we can pull off. This past week I used up another one. Yes, there were other options, but each of those options was not as good as the ones higher up on the list, and each of the other ones had higher costs to the business and long-term ramifications. And one of the options meant giving money to criminals, which is a decision on which you cannot put a price tag.

The really sad thing about this is that there is no protection from it happening again. In fact, more than one computer could easily have been attacked. The same email could have been opened by more than one person. The same email could arrive tomorrow at the office, and is certainly being delivered each day to other people around the globe as you read this post.

Thanks for taking the time to read our story of how one company went to the brink of disaster and survived to talk about it. I hope the lessons learned and lessons reinforced trigger action on your part to review the disaster recovery plan. If there is no plan, I hope you take the time to make one. Also, take the time to discuss this with your customers. Leave no one behind.

To the entity in charge of my count of “IT Miracles”, please grant upon me double the count I have remaining today. I’m certain this won’t be the last time I need to count on one.

Thanks to everyone who helped out that day. The teamwork was amazing! I never have to be reminded of how great a team we have at White Light Computing. Last Thursday the team shined brightly. We also have a great customer and a new found friend (the networking tech) who I look forward to working with for many years to come.

by Rick Schummer at July 19, 2015 08:42 PM

Alex Feldstein

July 18, 2015

Rick Strahl's Web Log

The Rise of JavaScript Frameworks - Part 1: Today

When it comes to Web development, JavaScript frameworks have moved front and center in the mainstream in the last year and a half or so. When looking at building modern Web applications, the bar has been raised significantly by what is possible, in large part due to the more accessible mainstream frameworks that are available today to build rich client and mobile Web applications. Although full-featured, end-to-end front end JavaScript frameworks have been around for quite a bit longer than just the last couple of years, it seems in the last year and a half they really established themselves in the Web developer mainstream, with extremely wide-ranging uptake that happened very quickly. Clearly these JavaScript frameworks have hit a nerve with the developer mainstream, scratching an itch that developers have wanted to scratch for some time but didn't quite have the tools to do so easily. Frameworks have filled that niche and caused a lot of developers that previously avoided complex JavaScript development to jump in head first.

In this post I describe my thoughts on how we've arrived here and why I think that frameworks are the new baseline that we will work and build on top of in the future. This post talks in the context of the current crop of frameworks, which I call the V1 round, based on the current crop of shipping technologies and EcmaScript 5. In Part 2 I'll talk about the V2 round, the new versions that framework providers are working on that take advantage of the latest and greatest technologies built around EcmaScript 6, new and more complex build systems, and a general refactoring of what we've learned from the V1 round. While the V2 round looks to bring many improvements, none of these frameworks are released yet and are barely even beyond the prototype stage. In some ways these updated frameworks use a much more complex ecosystem, which affects app integration and getting started. I'll tackle that touchy subject in the Part 2 post.

Fast Adoption of Frameworks

It's amazing to me how quickly JavaScript frameworks like AngularJS and Ember, and recently also ReactJs (which technically isn't a framework), and even commercial frameworks like KendoUI and Wijmo, have caught on and permeated the JavaScript developer mainstream. There are also a host of JavaScript-based mobile frameworks like Ionic, Onsen UI, Telerik's Application Platform and NativeScript that are very mobile-centric and based on complex frameworks as well.

Traditionally JavaScript components and libraries have had a lengthy uptake curve when it comes to the mainstream developers. I’m not talking about the bleeding edge developers here, but rather about the typical developer in the mainstream building business applications who typically picks tools and sticks with the technology for some time.


Framework uptake for the latter has been very quick and wide, and that has been a big surprise. The last time there was a huge spike like this was when jQuery started gaining serious momentum in the late 2000s, to the point that almost 90% of all Web sites were using jQuery. Frameworks haven't quite reached that level yet, and the spread is not as unipolar as jQuery's, but at the rate framework adoption is going, things are heading that way.

JavaScript frameworks have raised the bar so much that I think it’s safe to say, that a framework of some type has now become the new baseline for JavaScript development of rich client applications. In the not so distant future you may still use jQuery (or fully native JavaScript) style development for single pages or little page helpers, but as far as full client side application development goes, frameworks are going to become the norm if they haven’t already done so.

Some of the most popular frameworks in use today with the current crop of framework technology are AngularJS, Ember and ReactJs, along with commercial offerings like KendoUI and Wijmo.

For mobile frameworks there are Ionic, Onsen UI, Telerik's Application Platform and NativeScript.

Several of the mobile frameworks – namely Ionic, Onsen and KendoUI – also work in combination with AngularJS or are built directly on top of AngularJS. There's a lot of choice out there at the moment, with more coming. Currently AngularJS and derived frameworks are easily the most popular among developers, but with the V2 round of frameworks on the horizon that could very well change.

The current crop of frameworks succeed because they:

  • provide a coherent framework model for development 
  • provide a module system that allows for code separation
  • provide for easy, declarative data binding
  • allow you to create components
  • provide for URL-based and declarative routing
  • provide the support features nearly every application needs
    • form validation
    • HTTP services
    • animation
    • intra application messaging
    • event management

These may seem pretty obvious now, but if you think back a few years these were all very difficult problems to tackle individually and even more difficult to manage collectively in an application.

This is where frameworks shine – they integrate these features in a comprehensive way that is consistent and more seamless than individual components would be. On the downside you have to buy into the framework’s development model and mindset, but overall the benefits of a coherent whole far outweigh the pieced-together model.
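
To make the data binding and module points concrete, here is a minimal sketch of what this looks like in AngularJS 1.x – the module and controller names (demoApp, CounterController) are made up for illustration:

<div ng-app="demoApp" ng-controller="CounterController as vm">
  <input type="text" ng-model="vm.name" />
  <p>Hello {{vm.name}} – clicked {{vm.count}} times</p>
  <button ng-click="vm.increment()">Click me</button>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.3/angular.min.js"></script>
<script>
  // a module groups related pieces; the empty array lists its dependencies
  angular.module('demoApp', [])
    .controller('CounterController', function () {
      var vm = this;
      vm.name = 'World';
      vm.count = 0;
      vm.increment = function () { vm.count++; };
    });
</script>

The markup declares what is bound where, and the framework keeps the view and the model in sync – there’s not a single line of imperative DOM code on the page.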

Why now?

Full blown client frameworks have really hit a nerve, solving a problem that needed solving for a long time. For years, it seems, we built client side applications without a plan, and in retrospect it’s really surprising that we didn’t end up here much sooner.

Patterns

In the past there wasn’t much guidance on how to build large client side applications, which often resulted in mountains of jQuery (or raw JavaScript) spaghetti code. While many of us managed this process successfully, it was also very ugly for most, and it involved continuous learning and a lot of trial and error to find what worked… and what didn’t. I speak from experience when I say that I really hate looking at 3-5 year old client application code I wrote and trying to decipher what the heck I did back then. The code definitely was not as clean as I would want it to be, even though at the time I thought I was following the good concepts and best practices I’d arrived at.

It wasn’t for lack of trying to make things maintainable either, but somehow applications built with jQuery and a host of support libraries, with manual data binding and event hookups, always ended up being very messy no matter how hard I tried to organize the code. Raise your hand if you were also in this boat… I expect to see a lot of hands! Those of you who had the foresight and skill to not end up there – congratulations, you are the proud few…

Not only did code often end up getting tangled very easily, it was also daunting for many developers to jump in, because in the old way there wasn’t much structure or guidance. Getting started mostly meant starting with a blank screen and figuring out everything yourself, from structure to library choice to the code patterns used to manage application logic. How do you set up your JavaScript properly? How do you manage large code files? How do you break up complex logic? How do you split large code pieces into separate code files and load them effectively? These problems are unique to JavaScript – in other languages, compilers or official build systems provide some structure by pulling all the pieces together, and a UI framework provides a mechanism for modularity as part of a coherent whole. JavaScript and HTML don’t have such a thing natively.

JavaScript frameworks address these issues by providing guidance in the form of somewhat rigid pattern implementations that prescribe how to lay out an application. Most frameworks provide ways to modularize code and break complex code into smaller, more testable and more maintainable modules using a prescribed syntax. While there is some ceremony involved with this today, it does provide consistent structure to modules that makes it easy to separate code and understand what you are looking at.

I can admit that code modularity was probably the biggest hindrance for me in the past when building complex applications. Today you don’t need a framework for this – you can use any of the many module systems (AMD, CommonJS, System.js etc.) independently of a full framework – but frameworks abstract all of this away even further into their own integrated module systems, combining module loading with other features like dependency injection in a single step. ES6’s native module system should make all of this much easier and more consistent in the V2 round, for frameworks and also for those who stick with plain JavaScript.
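
For reference, here is the same trivial module expressed both in AMD and in the ES6 syntax the V2 round is moving to – just a sketch, assuming a hypothetical greeter module; the AMD file needs a loader like RequireJS, and the ES6 file needs an ES6-capable loader or build step:

// main-amd.js – AMD style (e.g. RequireJS)
// the loader fetches 'greeter' asynchronously before the callback runs
define(['greeter'], function (greeter) {
  return { run: function () { greeter.sayHello('AMD'); } };
});

// main-es6.js – ES6 module style
// imports/exports are declarative and resolved by the loader or build step
import { sayHello } from './greeter.js';
export function run() { sayHello('ES6'); }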

Data Binding and User Interface

Without a framework you also have to deal with UI and application issues, like how to consistently assign data to and read data out of the DOM with manual data binding. There are literally dozens of ways you can do this, and you often end up using a few different ones in the same application. The fact that there’s no built-in UI framework for HTML/JavaScript applications is somewhat unique, and we’ve had to struggle with this since the beginning of the Web.
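
As a reminder of what manual data binding typically looked like, here’s a small sketch using jQuery – the element ids and the model shape are hypothetical, and jQuery is assumed to be loaded:

// read: copy the input's value into the model on every keystroke
var model = { name: '' };
$('#name').on('input', function () {
  model.name = $(this).val();
  render();
});

// write: push the model back into the DOM by hand
function render() {
  $('#greeting').text('Hello ' + model.name);
}

Multiply this by every field on every form and the appeal of declarative binding becomes obvious.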

Most other development platforms have built-in support for user interface and data binding abstractions. Think about a desktop framework like WinForms or WPF, or Visual Basic for that matter – in those frameworks you don’t have to worry about how the various pages are strung together, how code is loaded or how data is bound to controls; the base framework handles all that for you. In JavaScript and HTML this is not the case, so inherently there were always a million choices to make and lots of up front learning involved to pick the pattern du jour – which seemed to change every month or so.

It’s not surprising that in those days many developers were turned off by complex JavaScript development and decided to just not go there – or at least not go the full client centric SPA application route. It is difficult to manage complex applications without some bedrock foundation and a base blueprint, especially if you are new and starting from scratch. Even if you do your reading you are likely to get confused by all the choices available.

Although there were a few solutions out there at the time – Backbone came around in those early years – they tended to be esoteric and also very low level, adding a whole new layer of complexity on top of the existing mess. To me the very early frameworks seemed to make things more difficult rather than ease the process of building complex client side logic, which is why I built my own subset that made sense to me and addressed the specific problems I had to solve.

In the years preceding the current framework round I had built my own mini framework that provided the base services and features I use everywhere. Some of it wasn’t optimal, and while it all worked, it took constant maintenance to keep it up to date, tweak it and deal with minor incompatibilities among browsers and various other libraries. While it helped me tremendously in understanding how a lot of the underlying technologies worked, it really wasn’t anywhere near the best use of my time to screw around with this low level stuff. And I know I wasn’t the only one – nearly every JavaScript dev who was doing anything reasonably sophisticated was in the same boat, building their own micro-libraries of utilities and helpers to perform many common tasks. Parallel development of the worst kind…

You might have mitigated some of this by using and combining multiple JavaScript libraries, but that too had risks – integration issues and style differences, learning this or that library out of context, and the overhead of pulling in many large dependencies for the small subset of features you’d actually use. And after all that, you move to a different client and all of that learned stuff goes out the window because they’re using a different set of customized tools.

For me and my tools it worked well enough, but it was a major pain to build and maintain that code. It’s not a process I want to repeat…

Frameworks Blitz

But all of that changed with the advent of the more capable and much more comprehensive frameworks that started arriving on the JavaScript scene a few years back.

My journey with frameworks started about 3 years ago, and it took me a while to pick and choose something that worked for me. More so, I was waiting out the initial pain of these then-new’ish JavaScript frameworks getting over their 0.x blues.

Early Days

Backbone was the earliest of these frameworks that attempted to provide a common development pattern for building complex applications. When it arrived it made quite a stir by providing a relatively small blueprint for how to package application logic to build more complex applications. Personally I never got into Backbone, because at the time it seemed quite complex and low level. I didn’t quite get it yet – in a lot of ways it seemed to take more code to write stuff I was already writing, which felt like a step back. But Backbone did provide the first step toward a common and repeatable project structure, one that in retrospect makes a lot of sense but didn’t really show its value until you got into building fairly complex applications.

Growing up

A couple of years later Angular and Ember started showing promise. I remember watching a demo of both frameworks in a conference video, and although they looked very rough at the time, I immediately got excited, because this was much closer to what I would expect of a full featured framework – one that provides enough functionality to replace my own micro framework. The value over what I had cobbled together myself was immediately obvious, and right then and there I knew that my micro-framework development days were done. To me it was always meant to be a holdover until proper frameworks arrived; it just took a lot longer before worthwhile contenders actually showed up on the scene.

I kept watching progress for a while before I tried out both frameworks and eventually started creating some internal applications with AngularJS. While both Angular and Ember have many things that are not exactly intuitive or obvious, both of these frameworks address most of the key features I mentioned earlier. The key is that they provide an end-to-end development solution that should be familiar to developers coming from just about any other development stack.

Huge Productivity

Using Angular I was able to build a few medium sized mobile apps in a ridiculously small amount of time compared to how long it took me to do the same thing with my old home grown toolkit. I ported a couple of small apps from my old stack to the new one, and the amount of code shrank to less than a quarter of the original. Building the same app from scratch took roughly a third of the time, and that included learning the framework along the way. In terms of productivity the improvements were quite dramatic, and the resulting application was much more functional to boot, as new features were added. The real bonus though was letting the app sit for a few months, coming back to it, and not feeling like I had to rediscover my own code all over again. Because of the modularization it was reasonably easy to jump right back in and add bug fixes and enhancements.

Modularity and Databinding are the Key

When it really comes down to it, the two biggest productivity wins for me were the ability to easily modularize my code and having a non-imperative way to do data binding. Being able to create small, focused modules to back a display view or component, and being able to describe your data as part of a model rather than manually assigning values to controls, is a huge productivity win. Both of those were possible before, of course (module systems abound today, and data-binding libraries like Knockout were around before frameworks started to rise), but the frameworks managed to consolidate these features, plus a host of other support infrastructure, into a coherent and consistent whole.

It’s not all Unicorns and Rainbows

It’s still possible to shoot yourself in the foot when you do something stupid like binding too much data into a form, using inefficient binding expressions or filters, or creating inefficient watches. I’ve run into funky data binding issues where updated model values fail to update the view, or where model updates fire watch expressions in recursive loops. Stuff does go wrong, and then it requires some sleuthing into the framework.

Sometimes you have to fight the framework when you’re doing things slightly differently than it wants you to. But in my experience that is rather rare, and when it does happen I can always fall back to low level JavaScript and manipulate the DOM manually. The way I see it you’re not giving up very much – you still have all the low level control, even if using it is often frowned upon in the framework guidelines.

Frameworks have brought a ton of new developers to JavaScript, including many who know very little about the esoteric nature of how JavaScript works. Let’s face it – JavaScript is one func’ed up language, but it is what we’re stuck with in the browser. I’ve given up fighting it and try to make the best of it by understanding as much of its esoteric nature as possible, although my rational mind struggles with many of the illogical language ‘features’. For developers inexperienced with JavaScript it can be difficult to understand where the seam is between framework and underlying language, and JavaScript’s funky behavior makes it easy to get into trouble when you don’t know the language quirks. The best advice I have for developers new to JavaScript is to spend some time reading up on the core features of the language. The best and most concise book to start with is still Douglas Crockford’s JavaScript: The Good Parts. Then spend a few hours coding through some of the scenarios mentioned in the book. Understanding closures and variable scoping, floating point number behaviors and the DOM event loop are probably the most relevant issues you have to understand when working with these frameworks.
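
To pick just one of those quirks, here is the classic closure-and-scoping trap – you can paste this into any browser console or Node to verify the output:

// var is function-scoped, so all three callbacks share the same i.
// By the time they run, the loop has finished: this logs 3, 3, 3.
for (var i = 0; i < 3; i++) {
  setTimeout(function () { console.log(i); }, 0);
}

// Capturing the current value in a closure (an IIFE) logs 0, 1, 2.
for (var j = 0; j < 3; j++) {
  (function (n) {
    setTimeout(function () { console.log(n); }, 0);
  })(j);
}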

JavaScript frameworks bring to mind the old adage: with great power comes great responsibility. When you give very powerful tooling to developers who may not understand the underlying principles or even the core components of the JavaScript language, it’s easy to end up with complex applications that are badly engineered and look like steaming piles of spaghetti code. Functional – yes. Maintainable – not so much. But compared to managing the complexity without a framework, the level of spaghetti-ness is actually more manageable, because at least there are isolated piles of spaghetti code in separate modules. Maybe that’s progress too…

Much of this is due to the fact that the current crop of frameworks – while very powerful – are very much 1.0 versions. They are first implementations of a general vision, and the developers of these frameworks initially focused on making things possible rather than making them easy or maintainable. The latter concerns have come much later, and while maintainability, performance and usability have improved, in many cases a lot of the improvements have been bolted on. The 2.0 versions of most of these frameworks, which are under construction, are ground-up rewrites that aim to fix many of these early implementation issues. Whether that pans out we’ll have to wait and see (and I’ll talk about this topic in Part 2).

What took us so long?

If I had to summarize why this wave of frameworks has been so successful, I’d argue it’s because they’ve provided us with a base blueprint for how to structure an application, as well as a powerful and easy way to handle data binding. Those have been the huge missing pieces that made JavaScript development of anything non-trivial such a pain in the past.

In retrospect it really seems crazy that it’s taken us this long to get to this point. Navigation, data binding, form validation, event management, HTTP services, messaging – those are all things that any even moderately sophisticated application needs, so why the heck were we constantly reinventing the wheel, over and over again, each in our own individual ways?

It’s amazing that we’ve come this far in client side Web development and have made do without a base framework layer. Most other UI platforms provide one. Just think about tools like Visual Basic and FoxPro in the old Win32 days, WinForms and WPF on Windows, Cocoa on the Mac – all of these provide base frameworks to build applications, along with build systems that know how to bundle stuff so you can run things. You don’t worry about how to modularize your code or handle data binding – it’s part of the system itself.

JavaScript is more like a raw language, like C# or Java, but one that never had a proper UI framework to go along with it. Would you build your own UI framework in C#, C++ or Java? I doubt it – but that’s exactly what we’ve done with JavaScript in the browser until recently. A UI framework and a general organizational abstraction layer are generally accepted as a standard part of a development platform, and JavaScript just never had that until now.

The advent of more capable – and also bigger – JavaScript frameworks has brought a renewed interest in JavaScript development. Frameworks have caused a lot of developers who were previously wary of JavaScript to jump in head first.

I’ve been surprised to see the uptake of frameworks – especially AngularJS – in companies that previously were ferociously anti-JavaScript. I’ve also seen relatively inexperienced JavaScript developers build fairly complex and very functional applications with these frameworks. I work with quite a few developers who are very slow to adopt new technologies, and quite a few of those who ignored a lot of other technology trends in the past are all of a sudden very gung-ho, jumping in with both feet into JavaScript frameworks and producing good results.

It’s not hard to see why: client side application development in the browser has been on everybody’s radar for a long time, all the way back to the early days when IE first rolled out the XHR object and DHTML forms in the late 90’s (which all the other browser vendors snubbed until 10 years later). It’s something that most developers can clearly identify with, but it’s been really difficult to do right for a long time.

JavaScript frameworks provide a much easier point of entry to build rich client Web and mobile applications and that is a good thing.

Open Source is a Key Feature

JavaScript frameworks abstract some of the hard-won experience about what works and what doesn’t when it comes to DOM manipulation, JavaScript quirks and best practices regarding performant JavaScript. Much of this knowledge comes from the experience of thousands of users who use the code and often report as well as fix bugs. The fact that all of the big frameworks are open source and developed by a large number of developers is no accident – it’s a necessity for dealing with the complexity of the many different browser execution environments we still have to run in. It’s made it possible to take advantage of the group mind to build better solutions. Far more people can be involved in this process of reporting and fixing issues than a single developer, or even a private or corporate entity, could ever muster.

These projects uniquely benefit from the open source development model, and it’s a key component of the success of these frameworks.

Too much, too big?

There are those who decry JavaScript frameworks as bloated, do-too-much pieces of software, and the truth is you can do all of these things yourself today, either by writing your own or by piecing together various components that provide similar functionality. It’s easier today than it was 5 years ago, as lots of new libraries have sprung up to provide support for key features that you see embedded in frameworks today.

It’s a viable option to build your own micro-framework, but the problem is that it takes a much more advanced JavaScript developer to work in this space, and even then I’m not sure you would build something as capable – and certainly not as competitive. As developers we should strive for a sense of unity, not rampant individualism, so that code is more portable and more understandable across applications. This approach might still make sense to a small subset of developers, but for the mainstream I cannot point to any serious downsides of using a framework.

I also find the size argument dubious at best. Most sophisticated applications use a ton of the functionality these frameworks provide, and while their script footprint is not small, if you were to piece together even half of the feature set from other, more specific libraries you’re just as likely to end up with the same or a bigger script footprint. Yes, there’s a price of admission, but it’s worth it. For example, as of Angular 1.4 the minified and gzip-compressed file is 45k, which is hardly a deal breaker in my book. Anybody who complains about that – especially after subtracting whatever size a custom set of libraries would take – is micro-optimizing in the wrong place. Granted, you may still have to add a couple of additional small support libraries for routing and miscellaneous add-on features, but the overall footprint – roughly the size of a medium sized image – is still anything but large for the amount of value you get.

The argument that building your own tools and frameworks helps you learn more about the stack you work on is certainly a valid one. I’ve followed that approach for much of my developer life, but I’m finding it’s getting too damn difficult to keep up with the changes in technology, especially in JavaScript, where things are moving too fast to follow. The level of complexity of what we’re trying to build these days has also ratcheted up considerably from what we built just a few years ago, and it’s difficult to build that level of complexity from scratch. The latest rounds of upheavals – leading to ES6 and all the new build system technologies – are making my head spin. If you’re a component or library developer you have to keep up with all of this just to keep your code compatible.

The way forward is to be a part of something bigger and contribute rather than to reinvent the wheel in a different way yet again.

We’re not going back

There's clearly a lot of demand to build rich client side applications on the Web and all of these frameworks address the sweet spot of providing enough structure and support features to make it possible to build complex applications with relative ease.

To me it’s obvious that the days of low level JavaScript for front end applications are numbered for the average developer, and baseline frameworks are going to be the future for the majority of Web developers who need to get stuff done. The productivity gain, and the fact that the frameworks encapsulate hard-won knowledge and experience about quirks and performance in the DOM and JavaScript, make it impractical to ‘roll your own’ any longer and stay competitive in the process.

As time goes on these frameworks are only going to get more sophisticated and provide more functionality, which will become the new baseline of what is expected for typical Web applications. As is often the case, technology ends up sorting itself out and building on top of the previous generation to extend new functionality. We’re building a tower of Babel here, and we’re already seeing that happening with the next generation of several of these frameworks – AngularJS 2.0, Ember 2.0 and Aurelia – all of which are seriously overturning the apple cart by moving to the latest technologies involving the EcmaScript 6 language and build system features. We’re in for a rough ride in this next iteration of frameworks.

But – we’ll leave that discussion for Part 2. In the next part I’ll address the complexities I see with the next generation of these JavaScript frameworks, which attempt to bridge a whole new slew of JavaScript standards and functionality, along with new tooling, to help us build the next generation of sophisticated client side applications.

Stay tuned…

© Rick Strahl, West Wind Technologies, 2005-2015
Posted in Opinion  HTML5  Angular  JavaScript  

by Rick Strahl at July 18, 2015 06:11 PM

FoxProWiki

DFPUG

Editor comments: Whitespace


DFPUG is short for "Deutschsprachige (German-speaking) Foxpro User Group". Initial groundwork was done by Juergen Wondzinski (aka wOOdy) and Gerhard Paulus in 1992, and the group was then founded in 1993 by Rainer Becker. It has about 700 members - companies as well as individual programmers. It organizes the yearly Devcon Germany as well as regional monthly meetings in about a dozen cities in Germany, and publishes the quarterly loose-leaf magazine Foxx Professional (200 pages). Due to the size of the user group it is incorporated under the name ISYS GmbH.

Main online offerings are a German-language FoxPro forum at forum.dfpug.de - the only free German-language discussion site for all VFP and other related products! See our webpage at www.dfpug.de for a large amount of published material about Visual FoxPro, or the small English homepage www.dfpug.com . Additionally, we maintain a small German Wiki. The newest and by now largest offering is our new document search portal built with Microsoft SharePoint Portal Server - details can be found at Deutschsprachige Foxpro User Group Portal.

The DFPUG is a distributor for Hentzenwerke Publishing and offers all book titles via a small Webshop as well as by mail order via e-mail. The DFPUG is also the new owner and distributor of the framework Visual Extend.

The newest service of DFPUG is the localization of the Visual FoxPro IDE into several languages, in cooperation with Ken Levy, using the Resource Localization Toolkit from Microsoft. A localized user interface for Visual FoxPro 8.0 and 9.0 is available in German, French and Czech at Deutschsprachige Foxpro User Group Portal. The newest versions for SP1 and the upcoming SP2 are available as donationware at the international webshop.

July 18, 2015 05:38 PM

Articles

Ubuntu Jam at the University of Mauritius

Operating systems are simply tools to do a job...

And therefore, I have to admit that even though I use Microsoft Windows on a daily basis to earn my living, I have also been using Linux for almost two decades on various machines. Together with different types of virtualisation, I actually do not care whether an OS is running on bare metal or inside a virtual machine, and given the computing power of recent machines it hardly matters anymore. Given this little insight, let's hop directly into the Ubuntu Jam event from February 2015.

Saturday is usually the time the children are on tour with me, so why not take them to the University of Mauritius and have some fun together? Also, they know quite a number of folks from the Linux User Group of Mauritius. When we arrived at the campus it was actually simple to get a proper parking spot - just speak to the security guys around POWA, they are very friendly and willing to help. ;-)

Next, we had to look for those Linux geeks and penguins... Near the cafeteria, they said - as if I knew where the cafeteria is. Frankly, it was on our direct way to ask a group of students. Even though they gave us a strange but curious look, they were really glad to help, and we managed to arrive in time. Well, even too early... Anyway, enough time to get our gear in place. My dear son was busier with his Nintendo DS than with a Linux-driven laptop, but hey, that's absolutely fine. He's already geeky enough. Actually, later on - I don't know how he managed it - he was gaming on someone else's Android smartphone.

Disclaimer: I won't be accountable for any hacks and root kit installations on your device that he's going to do!

So better keep your smartphone under your control. Anyway, it seems that the phone owner and my son had a good time checking out some gaming apps. This gave me a bit of liberty to show off my older laptop running Xubuntu 14.10, to answer a couple of Xfce4 related questions and to advertise the Developers Conference. Yes, I keep a git clone on that machine, too - actually running on different TCP ports on Apache and nginx simultaneously. Geeky style...

Lots of hardware and software during the Ubuntu Jam - and the choice of tools covered a wide range...

Despite some light spray of rain, we had a great time during the Ubuntu Jam at the University of Mauritius (UoM)

Thanks to the vicinity of the UoM cafeteria, it was a no-brainer to just head inside and grab some drinks and food for the lunch break. Quite surprisingly, they also offer power drinks and other selections. Now, well fed again and still ambitious to handle Linux questions, I managed to get some exchange going with Ish, Nirvan, Nadim, Pritvi and others regarding the organisation and ideas for the DevCon. Even though there was a slight spray of rain, it seems we all had a good time on the campus, and I'm looking forward to attending the next Linux Jam - maybe then on openSUSE Leap or other distributions.

by Jochen Kirstaetter (jochen@kirstaetter.name) at July 18, 2015 01:25 PM

Alex Feldstein

July 17, 2015

Alex Feldstein

July 16, 2015


Rick Strahl's Web Log

Multiple Desktops in Windows

I spent the last month and a half using a Mac, running both OS X and Windows, and one thing I really appreciated while doing so was the multiple desktop support in OS X. It's been especially useful when running Parallels, which you can set up so that the Windows instance runs on a separate desktop, which is convenient.

I've since switched back to Windows, and I have to plead ignorance: I didn't know that Windows has had support for multiple desktops for some time. Multiple desktop support actually harks back all the way to Windows XP, but the operating system never officially exposed this functionality. However, there are a number of utilities out there that you can use to take advantage of multiple desktops today – in a limited fashion.

Windows 10 – Official Multiple Desktop Support

Even better, though, is that Windows 10 will natively support multiple desktops. Windows 10 officially adds multiple desktops as part of a host of new desktop manager features that can be managed through the Windows UI as well as with convenient hotkeys. Hopefully they'll also add support for touch or mouse pad gestures so that you can swipe to new desktops as you can on OS X, but currently I don't see support for that (touch pad vendors would have to provide the gesture mapping support, I suppose – then again, given how crappy most Windows machine touch pads are, maybe that's not such a good idea – my Dell XPS touch pad is the worst piece of crap I've ever used; it's amazing that manufacturers can't get such a simple device right).

Anyway, in Windows 10 you can use a number of shortcut keys to manipulate and manage multiple desktops:

Windows-Tab: Brings up the Task View, which includes a new Add Desktop option; this view also shows all of your open desktops along the bottom.

[Screenshot: Task View with the open desktops listed along the bottom]

Windows-Ctrl-Left/Right Arrow: Rotates through the active desktops. You can use these key combos, or press Windows-Tab and then select the desktop of choice interactively, as shown in the screenshot above.

Moving windows between desktops: You can move windows between desktops by simply dragging them from the task view on the active desktop onto another desktop at the bottom of the task list. There’s also a shortcut on the task view to move windows to another desktop. When you close a desktop with active windows, those windows are moved to the desktop on the left.


How useful is this?

I tend to run 2 or 3 monitors (depending on whether I'm on Maui or here in the 'remote office' in Oregon) and then set up 3 desktops:

  • Main Desktop
    This is my main desktop where I do most of my work and get stuff done – mostly development work, business stuff, writing, browsing for research etc.
  • Supplementary Desktop: Media, Email, Twitter, Social Browsing etc.
    I like to set up a separate desktop to keep all the things that I leave open for a long time and get them off my main desktop to make the main desktop less cluttered. If I run music using a music player I really don't want to see Pandora or the Amazon Music player on my desktop. Same goes for email. Gmail or Outlook is always open but I don't want it in my way while I'm working on stuff. For one thing it's a little less distracting – notifications that pop up, pop up on the secondary desktop. Likewise with my Twitter client. Having all that 'distracting' stuff on a second screen keeps the distractions to a minimum. I have to explicitly check over there to get distracted on purpose :-)
  • Web Debug Desktop
    During development I prefer to have all my Web related stuff running on a separate desktop. Typically this means running Chrome with a separate DevTools window, each taking up its own screen in a multi-monitor setup, which makes it very easy to see things happening. By having only the things I need running in this setup it's much easier to see what's going on. Other things I run on this desktop are test agents and other tools I use to fire off requests, like WebSurge for URL testing of APIs, etc. The nice thing is that development and the running application are separated only by the switch-desktop key, and I get a much cleaner, clutter free view to play with. It does take some getting used to pressing Windows-Ctrl-RightArrow instead of Alt-Tabbing to the browser and the dev tools, but that'll happen with time.

What’s missing

The obvious thing missing is that you can’t persist your desktops. You can open a new desktop and move things onto it, but there’s no way that I can see to actually persist anything on that desktop so that the same setup comes back the next time you boot.

Still it’s nice to just be able to ‘spread out’ while the machine is running. With reboots becoming a rare thing, having desktops persist for the lifetime of your Windows session might be all you need anyway.

Third party solutions serve that particular need today, and I expect third party solutions will also crop up for Windows 10 to extend this functionality with more permanent desktops and per-desktop configuration such as backgrounds, displayed icons and so on.

Multiple Desktops on older Versions of Windows

Multiple desktops have actually been supported in Windows since Windows XP, but there has never been any official UI built into Windows to create or access those desktops. However, there are third party tools you can use to create and manage desktops. The most popular is:

Desktops from Sysinternals

In typical Sysinternals tradition, it's a small self-contained utility that provides the core features you need. Desktops is a small tray icon application that allows you to manage up to 4 desktops.

When you click on the taskbar icon you get four squares, each of which represents a potential desktop to create:

[Screenshot: the Desktops utility's four-square desktop switcher]

You can then switch desktops by using the pop up view above, or by using a set of hotkeys you can configure as part of the options. Desktops is pretty bare bones. It doesn't have support for closing desktops and you can't move things around, but its simplicity and small size make it a good choice for desktop management.

There are a host of other tools that let you create virtual desktops, but most don't actually use this 'hidden' Windows feature; rather, they create their own separate desktops to display and manage. The nice thing about this simple but basic utility is that it's small and lightweight and works with what's in the Windows box.

Summary

I've only used the new desktop features in Windows 10 for a few days now, but I've already found them to be pretty damn useful for keeping clutter and distractions to a minimum, especially when coding. So if this is new to you, it might be worth checking out. I'm glad to see multiple desktops become an officially supported feature in Windows 10.

© Rick Strahl, West Wind Technologies, 2005-2015

by Rick Strahl at July 16, 2015 06:03 AM

Alex Feldstein

Photo of the Day


"Bluemanity"
Airbus A320

by Alex Feldstein (noreply@blogger.com) at July 16, 2015 05:00 AM

July 15, 2015

VisualFoxProWiki

VFPSetClasslib

SET CLASSLIB opens one or more class libraries to provide access to a collection of classes. Please discuss the pros/cons to having a one-class-per-classlib (VCX) design versus a multiple-class-per-classlib design.

Please sign your posts even if you have to sign with a pseudonym. It makes it easier to be polite to you. Tom Cerul
  • One-class-per-classlib:
    • Pros:
      • The Class Browser has less clutter; when moving from a child class to a parent class, only the parent class is shown. -- Mike Yearwood
      • The Modify Class dialog box takes one less click. It defaults to the first class in the vcx. -- Mike Yearwood
      • One developer per class, not classlib - read team productivity improvements.
      • Only include the classes you need into an EXE.
        I believe this would be a huge benefit to maintaining an existing system. There would only be certain classes in the project, making it far easier to see actually used classes, instead of seeing every class in a framework. -- Mike Yearwood
      • Safer if a classlib gets corrupted
      • Easier to share classes with others, without giving away too much. -- Mike Yearwood
      • Less duplication of virtually identical or similar classes/methods, by sharing classes. IOW, if I have a class with a method to compute the number of weekdays between two dates and I send my class to you, and you already have a compute-weekdays-between-two-dates function, your system ends up with two such functions. -- Mike Yearwood Steven Black or someone created a Class Browser addin that will copy a single class and all its superclasses from one lib to another. -- Bob Archer
        I don't understand this. What does this have to do with one vs. multiple-class-per-classlib? You send me your MikeYearwood_DateThings.vcx with one class and I add it to my project. Will my KurtGrassl_DateThings.vcx disappear? No, I have two such functions, regardless of whether my KurtGrassl_DateThings class has its own vcx or not.
      • - To clarify - if I give you a red lego block, you can see that it duplicates an existing red lego block. Otherwise you'll have to go digging in your lego bags to find if you already have one. You're not supposed to have duplicate functionality. http://c2.com/cgi/wiki?OnceAndOnlyOnce and http://en.wikipedia.org/wiki/One_and_only_one. -- Mike Yearwood
      • Never have to refactor to move a class from one multi-class classlib to a new common classlib for sharing with another project. Another productivity improvement. -- Mike Yearwood
      • Takes less time to compile one small classlib - which is beneficial while writing code versus recompiling a large classlib of arbitrarily grouped classes. -- Mike Yearwood
    • Cons:
      • If there is an error in a class in a vcx, the project manager only reports the classlib name. The more classes, the harder it is to find the error. -- Mike Yearwood
      • The IDE occasionally packs the vcx, which causes a noticeable delay when saving a big vcx. -- Mike Yearwood
      • Makes the resulting exe larger, because each VCX adds some overhead to the exe. However, this is offset by avoiding the huge numbers of unused classes that get added to simple exes with multi-class libraries. -- Mike Yearwood
      • Results in far more classlibs in the project manager. However, if they are all named properly it will not be hard to find a particular class. Imagine you need a ctrBall class. If it is contained in a ctrBall.vcx, you already know exactly which library to open. -- Mike Yearwood I think it was Steven Black that wrote a little utility called mc() (short for modiclass I assume) that opens a class in the class editor. It uses metadata produced by a form called vcxlist. I use mc() all the time for quite a while now. Not sure where I got it, some book or article or something. -- Bob Archer
      • Significant impact on VSS-Project Manager integrated environments. Of course I argue it's best not to integrate VSS and the project manager. -- Mike Yearwood I think it was also Steven Black who wrote an article on using the Component Gallery rather than the Project Manager. This exposes classes rather than libs. -- Bob Archer
        Hi Bob - That does help somewhat, but it does not reduce the exe size. You still end up adding un-needed classes to the exe. -- Mike Yearwood
      • The maximum number of characters in a macro-substituted variable is still 8,192. Code like x=SET([CLASSLIB]), SET CLASSLIB TO ..., SET CLASSLIB TO &x is much more likely to choke if the number of class libraries grows the way it would if each class library held a single class. The workaround is to SET CLASSLIB TO ... ADDITIVE as each class is instantiated; then you need not remember and reset the list at all. -- Mike Yearwood
      • One developer per classlib, not per class UNLESS source control is involved. Source control merging is not perfect. - On an effectively managed project that shouldn't be the case. Here's how effective management fails. We have to get this bug fix out before noon. Oops. Sorry manager, because we built everything into massive lumps, only one programmer can work on that part and there's too many things involved to allow us to meet the deadline. -- Mike Yearwood. This is not a problem anymore. Now you can do concurrent programming with FoxPro binaries in 2 or more branches using Fox Bin 2 Prg, merge the changes and rebuild the binaries. -- Fernando DBozzo
      • There's a demonstrable performance penalty for multiple SET CLASSLIB commands.
      • Object instantiation is slowed based on the number of files that must be searched. That only matters at design time. In the exe there is no set of files to be searched. -- Mike Yearwood

  • Multiple-class-per-classlib:
    • Pros:
      • Fewer files to have around
        Is this really a benefit? -- Mike Yearwood Yes. Only because VFP does not have something for classes like the .PRG, which can be a module unto itself without the need for additional physical containers. -- Mike Yearwood
      • Fewer SET CLASSLIB TO commands required
        This may be a significant point! -- Mike Yearwood A neat trick, which the Visual FoxExpress framework uses, is to build a table of all the VCX and PRG files in the project and put them into a .dbf that is included in the project. Then in your main program, run a function that automates the SET CLASSLIB and SET PROCEDURE calls. -- Bob Archer Hi Bob. That still requires running many SET CLASSLIB commands. I don't use any SET PROCEDURE calls, but I must use SET CLASSLIB calls, because I have no choice! -- Mike Yearwood
      • Potentially easier to find classes
        Is this because there would be more classlibs to open? I think this is a documentation issue. In fact, you'd be looking through a long list of classes to find a particular one. -- Mike Yearwood The length of the list of classes is the same. It's the length of the list of class libraries that would potentially change.
      • Grouping/packaging of classes that belong together. Hey, what are libraries for? What defines what makes classes belong together? Do all base classes belong in one classlib? -- Mike Yearwood Some say form, others say function, others say module. It's something that should be agreed upon by a project team. In general, any type of logical grouping of components helps with organization. What's logical to one will not be logical to another. -- Mike Yearwood That's obviously not necessarily true, and in most cases false. A class library named Customer that contains customer forms and related components would make sense to pretty much any developer immediately. Too often I see a single classlib that contains completely unrelated things. In most cases the logical grouping does not exist. -- Mike Yearwood
      • Only one developer at a time can work on any of the classes in a class library, helping to maintain and promote consistency throughout the library
        If so, then how can any group of people work together? Surely the resulting inconsistency would make for something hideous? -- Mike Yearwood.
      • Easier distribution of a society of classes. During runtime there is no distribution. The design of classes should be focused on use and reuse of the classes at runtime. If you have a class that has proper loose connections to other classes in other libraries, you will still have to find a way to distribute them. So just put all classes in one VCX and there you go all your distribution problems are gone. -- Mike Yearwood
      • Easier identification of a society of classes.
      • Easier to understand how another developer viewed class collaborations. The class collaborations at design time are unlikely to be matched by the physical arrangement in the classlibs. -- Mike Yearwood Really? So if someone had a class library named something like Explorer and it had in it a subclass of the treeview, listview and imagelist, it wouldn't make sense to you? Now we're getting somewhere! I understand your confusion! I don't mean a single row in a VCX at all. Your explorer classlib can have subcomponents; classes that have been subclassed to support the explorer class are totally valid. That's just like having an SCX with a bunch of contained controls. However, the scx itself is the module! That's perfectly acceptable to me. It is not valid to create a new treeview that could be used outside of the explorer class and house it inside the explorer vcx, because then I have no easy way to find that treeview. You can subclass the VFP treeview, add application-wide functionality to it, and store it in its own classlib. Subclass that treeview, add Explorer-specific functionality, and store it in the explorer classlib. The explorer-specific treeview is no longer a reusable class. It is not intended to be subclassed for anything else. You would intend to subclass the explorer. -- Mike Yearwood
    • Cons:
      • Only one developer at a time can work on any of the classes without source control - which is imperfect.
        This is not a problem anymore. Now you can do concurrent programming with FoxPro binaries in 2 or more branches using Fox Bin 2 Prg, merge the changes and rebuild the binaries. -- Fernando DBozzo
      • No chance of classes which contain other classes being separated from their 'children', if they are in the same classlib
        I don't understand this :-( -- Kurt If a set of classes are components of a single class in the vcx, like the added components of an SCX are in the scx, then the contained classes don't get separated from the parent. -- Mike Yearwood
      • The EXE build 'pulls in' all classes in the class library (VCX) even if you only use one of them in your program, potentially bloating the EXE.
      • If the classlib gets corrupted, any changes since your last backup may have to be restored, damaging many classes at once.
      • Source Control has to do much more work to integrate changes from multiple developers. At times, it will fail, requiring manual intervention. Who wants to do more work? -- Mike Yearwood
        This is only true when source control is used improperly or circumvented. Source control can be tricky to set up and can fail. Nothing is perfect.
      • If a class in one multi-class classlib ends up being needed in another app, it will require refactoring to move this class to a new common classlib. -- Mike Yearwood - this is only true if there's demonstrable and significant harm from having the unused classes in the other project.
This is clearly more of a matter of personal preference than anything else.
NONE of the arguments above are persuasive in the least for either case. Some are erroneous from my point of view.
There are instances in life, and in VFP, where one way of approaching something is equally "good" as another and in such cases personal preference will rule. -- Jim Nelson

July 15, 2015 06:57 PM

Alex Feldstein

Sandstorm's Blog (Home of ssClasses)

xBox, How To..

xBox' Usage, Explained Further

xBox is among my most complicated classes to date, due to the fact that I have given it three (3) possible usages.  A new user was getting confused by it, so I decided to rush out explanations of its features here, because I believe not everyone can easily see - despite the examples shown in code on the sample form - how the class' different features can be utilized to each developer's advantage.

Take a look at this image below where the class is using the Combo Feature (QuickFill + Popup Grid)


But then, the class is designed to be used as just a plain quickfill, a plain popup/dropdown grid, or that combo.  Explanations follow:


=====

Plain QuickFill/AutoComplete Feature - the above image without the popup/dropdown grid.  This is where the class suggests the next possible match based on what you have typed so far, by highlighting that suggestion.

Quickfill means it will fill in the possible remaining values for you.  In the image above, I just typed A and it results in that, where the highlighted text is the suggestion based on the nearest possible match.

Quickfill will be in place and fire if you use these properties:
  • AutoCompSource = The source table/cursor for searching autocomplete/quickfill
  • AutoCompField = the source field of the source table/cursor.  This is what gets returned to the class' Value property
  • ExtraField = is the field of the source table/cursor of secondary possible return value (aside from AutoCompField)
  • ExtraValue = mandates the type of value to be returned (character, numeric, etc.).  If it is numeric, you have to put 0 there.  Later I will adjust the class to make this automatic, dependent on the type of the ExtraField field


One thing you need to realize with Quickfill in place is that the class will utilize either SEEK() or LOCATE, based on whether AutoCompField is indexed or not.  That means the record pointer of the underlying source table changes as you type, since the class positions it on the nearest match.

Another is that with Quickfill in place, the class manipulates the mouse insertion point via SelStart, and the way to clear the previous entry is by double-clicking on the control or calling its _Clear() method.

Again, Plain QuickFill does not have the popup grid.

====

The Plain Popup Grid (xBoxGrid subclass) feature is again the above image, but without the highlighted suggestions.  The class performs fewer manipulations that way and is good for fast searching.  It does not manipulate SelStart, so it behaves just like an ordinary textbox with that extra dropdown of near matches inside a grid.  To use this feature, you have to do it in code.  xBoxGrid is affected by 3 events, i.e.,

Init - where we set up the popup grid.  You need to include DODEFAULT() and then set the grid by calling the _SetGrid() method.  Let us take a look at the code responsible for generating the above image:

* Init     
DODEFAULT()

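* Configure the dropdown: columns captioned Asset#, Registration and Asset Type
* (widths 60/60/250), vertical scrollbar (2), grid width skipped, max height 150,
* grid lines 2, Calibri 10, and no title bar on the popup form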
This._SetGrid('Asset#-60|Registration-60|Asset Type-250',2,,150,2,'Calibri',10,.T.)

Lparameters cColumnList, nScrollBar, nWidth, nHeight, nGridLines, cFontName, nFontSize, lNoTitleBar

Where:

cColumnList - is for the column header captions and widths.  It can be broken into 2 things:

  • Pipe Symbol - To break the Column Header Captions, e.g. Asset|Registration|Asset Type
  • Dash Symbol - To instruct by code the width of the column, e.g., Asset-60 or Column 1's width is 60.  Default is 80.

While these will later be bound by the class to the fields of the resultant cursor, based on the field sequence of the SQL SELECT done in the InteractiveChange event (explained later), you have to realize that what you type here is only for the columns' headers and widths.  So you can include a space if you like, as in Asset Type.

nScrollBar - the grid's scrollbar setting; 2 in this case, i.e., a vertical scrollbar only

nWidth - the grid width (omitted in the example call above)

nHeight - the grid's maximum height, 150 in the example above.  This is a maximum rather than a fixed height, as the popup grid auto-adjusts to a smaller size based on the number of results in the cursor.

nGridLines - the grid lines setting; 2 in this case

cFontName - the font to be used on the grid

nFontSize - the size of the font to be used on the grid

and finally lNoTitleBar - whether to remove or retain the title bar of the popup form

Do not get confused by the _SetGrid() method: it is just meant to set up the appearance of the dropdown grid, readying it for the resultant cursor that is created later via the _SearchGrid() method.

Actual cursor creation based on what has been typed so far is done here:

* InteractiveChange    
Local lcSQL, lcRetVal, lcAssetNo
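* [MySearch] in the WHERE clause below is a placeholder; the class
* replaces it with whatever has been typed so far in the textbox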
TEXT TO lcSQL NOSHOW PRETEXT 12
Select t1.assetno, t1.regno, t2.xDescript, t1.assetpk, t1.assignfk From assets t1 ;
      LEFT Outer Join adescript t2;
      ON t2.descpk  = t1.descfk;
      where [MySearch] $ t1.assetno OR [MySearch] $ t1.regno OR [MySearch] $ t2.xdescript;
      order By 1 ;
      into Cursor junkasset nofilter
ENDTEXT    
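* TEXT...ENDTEXT captures the line-continuation semicolons literally,
* so strip them to turn the SQL into a single-line statement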
lcSQL = STRTRAN(m.lcSQL,';','')

This._searchgrid(m.lcSQL,'junkasset.assetno','junkasset.assetpk')

While I do it that way, you can put the SQL SELECT straight into the first parameter and skip the local variable, your choice.  Just remember that the SQL SELECT needs to be of character type, meaning you have to enclose the whole SQL SELECT in quotes, double quotes or brackets.  For readability's sake, I prefer the local variable approach above.

The above means we are asking it to fire the _SearchGrid() method and perform the query (m.lcSQL) on every change we make (each keystroke) in its textbox section - in effect searching and filtering the resultant cursor as we type.  It takes only three (3) parameters:

Lparameters cSQL, vValue, vExtraValue

Where:

cSQL = the SQL SELECT we want.  In the example above, it is based on the local variable lcSQL

vValue = the primary value, which is what gets returned to the textbox portion.  It is a field of your choosing in the resultant cursor.  Above, it means we want to return the assetno field from the junkasset cursor

vExtraValue = the secondary "invisible" value that the class can return; in the example above, assetpk, which is an AutoInc primary key in my junkasset cursor


Navigation:

Once the popup grid appears, you can easily navigate to it either by clicking on it or by pressing the down arrow.  Either of those will set focus onto the popup grid.

Once the popup grid receives focus, it will automatically return the value of the highlighted row (the ActiveRow) back to the textbox section.  To finalize the selection, either:

  1. Click outside of the popup grid
  2. Double-click on the selected row
  3. Press Enter key on the active row 


Another event that can be used here is ProgrammaticChange, where I manipulate other objects on the form based on the return value from the popup grid selection.  Its usage is something like this:

* ProgrammaticChange   
With Thisform
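      * copy the selected row's values into form-level properties
      * so the rest of the form can see them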
      ._driverpk = junkasset.assignfk
      ._assetpk = This.ExtraValue
      ._assetno = Alltrim(This.Value)
      ._showimage()

Endwith

Here I transferred the needed values to my form properties, so they will be visible to the entire form, and called a method that shows the image of the asset plus other things.

ProgrammaticChange is where we can implement immediate actions based on the active row of the popup grid as we navigate through it, whether via arrow keys or mouse clicks.  So as we move through the records of that popup grid, we can update other objects on the form as well.

====

Combo Usage (Both QuickFill and Popup Grid in place)

The image above shows the combo approach.  While this works well on my end, as I have said, I am not yet confident that it will give you what you want.  So to avoid unknown problems, I suggest you not use this yet.  Or, if you want, use it, and if something is wrong, inform me.

====


Things for your consideration in selecting the usage type:

Quickfill - uses an actual table or cursor and performs either SEEK() or LOCATE, so the record pointer changes as we type.  It requires manipulation of the mouse insertion point via SelStart and stores what you have typed so far; to ready it properly for the next entry, you have to either double-click on it so it fires the _Clear() method, or call the _Clear() method yourself.

Plain Popup Grid - creates a cursor that gets rebuilt over and over as you type, filtered by your WHERE condition.  [MySearch] is mandatory there, as the class looks for it and replaces it with whatever you have typed so far in the textbox section.
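
Conceptually, each keystroke re-runs the stored SELECT with that token replaced; something like this (an assumption about the internals, not the actual class code):

* The typed text lands inside the [ ] string delimiters of the stored SELECT
lcRun = STRTRAN(m.lcSQL, "MySearch", ALLTRIM(This.Value))
&lcRun   && rebuilds the junkasset cursor, filtered by what was typed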

Keep checking this post, as I will slowly update it with more properties and features of the class.  This will be the help documentation on the proper usage of the class.

P.S.

A bug I realized only later: the popup grid won't work with ShowWindow = 2 or Desktop = .T.  I will check that one out sometime.

This is it for now.  Cheers!




by Jun Tangunan (noreply@blogger.com) at July 15, 2015 01:09 AM

July 14, 2015

Alex Feldstein

July 13, 2015

FoxProWiki

FoxRockXIssues

The 44th regular issue of Fox RockX is available:

May 2015 - Number 44
01 Deep Dive: Add Gauges to Your Applications by Doug Hennig
08 Know How: Using OVER with analytic functions, Part 1 by Tamar Granor, Ph D
13 Future: What's in That Data Set? by Whil Hentzen
17 VFPX: Update Core VFP by Rick Schummer


March 2015 - Number 43
01 Deep Dive: New UI Classes From Carlos Alloatti by Doug Hennig
06 Know How: More on OVER by Tamar Granor, Ph D
11 Know How: Do custom replacements with Go Fish by Tamar Granor, Ph D
13 Future: How Big Is That System? by Whil Hentzen
16 Future: Integrating Visual FoxPro and MailChimp - Part 5/2 by Whil Hentzen


January 2015 - Number 42
01 Deep Dive: The Latest Techniques in Deploying VFP Applications, Part 3 by Doug Hennig
05 Know How: Combining Query Results by Tamar Granor, Ph D
08 VFPX: Hacking IntelliSense by Rick Schummer
15 Future: Integrating Visual FoxPro and MailChimp - Part 5/1 by Whil Hentzen
21 Information: The Fox RockX Portal by Whil Hentzen


November 2014 - Number 41
01 VFPX: Fox Bin 2 Prg by Rick Schummer
10 Know How: One-Step Insert and Update by Tamar Granor, Ph D
14 Deep Dive: The Latest Techniques in Deploying VFP Applications, Part 2 by Doug Hennig
18 Future: Integrating Visual FoxPro and MailChimp - Part 4 by Whil Hentzen


September 2014 - Number 40
01 VFPX: Fox Unit 4 by Rick Schummer
08 Know How: Summarizing aggregated data, Part 2 by Tamar Granor, Ph D
14 Deep Dive: The Latest Techniques in Deploying VFP Applications, Part 1 by Doug Hennig
19 Future: Integrating Visual FoxPro and MailChimp - Part 3 by Whil Hentzen


July 2014 - Number 39
01 Future: Integrating Word's Spellcheck to Your VFP Application by Whil Hentzen
14 Know How: Summarizing aggregated data, Part 1 by Tamar Granor, Ph D
20 Future: Integrating Visual FoxPro and MailChimp - Part 2 by Whil Hentzen


May 2014 - Number 38
01 Deep Dive: Unit Testing VFP Applications, Part 3 by Doug Hennig
08 VFPX: ExcelXML by Rick Schummer
14 Know How: Getting the Top N for each Group by Tamar Granor, Ph D
20 Future: Integrating Visual FoxPro and MailChimp - Part 1 by Whil Hentzen


March 2014 - Number 37
01 Deep Dive: Unit Testing VFP Applications, Part 2 by Doug Hennig
06 VFPX: Foxy XLS by Rick Schummer
10 Know How: Handling hierarchical data by Tamar Granor, Ph D
16 Future: Data Munging with Python, Part 2/2 by Whil Hentzen
19 Future: Data Munging with Python, Part 3 by Whil Hentzen


January 2014 - Number 36

01 Deep Dive: Unit Testing VFP Applications, Part 1 by Doug Hennig
06 VFPX: Thor's Finder by Rick Schummer
13 Know How: VFP: Consolidate data from a field into a list by Tamar Granor, Ph D
18 Future: Data Munging with Python, Part 2/1 by Whil Hentzen


November 2013 - Number 35

01 Deep Dive: Introduction to C# for VFP Developers, Part 5 by Doug Hennig
08 Know How: VFP: Ideal for Tools, Part 3 by Tamar Granor, Ph D
14 Basics: A Bevy of Timers by Whil Hentzen
19 Future: Data Munging with Python, Part 1 by Whil Hentzen


September 2013 - Number 34

01 Deep Dive: Introduction to C# for VFP Developers, Part 4 by Doug Hennig
08 Know How: VFP: Ideal for Tools, Part 2 by Tamar Granor, Ph D
16 VFPX: FoxyPreviewer by Rick Schummer


July 2013 - Number 33

01 Deep Dive: Introduction to C# for VFP Developers, Part 3 by Doug Hennig
08 SQLite: Case Study: Using SQLite to break the 2GB Barrier by Whil Hentzen
17 Know How: VFP: Ideal for Tools, Part 1 by Tamar Granor, Ph D
22 VFP: Application Updater by Rick Schummer


May 2013 - Number 32

01 Deep Dive: Introduction to C# for VFP Developers, Part 2 by Doug Hennig
06 Know How: Give Thor Tool Options by Tamar Granor, Ph D
12 VFPX: Dynamic Forms by Rick Schummer
21 Basics: Setting up VFP 9 by Whil Hentzen


March 2013 - Number 31

01 Deep Dive: Introduction to C# for VFP Developers, Part 1 by Doug Hennig
07 Know How: Make Thor Your Own by Tamar Granor, Ph D
13 Future: The Business Case for Upgrading Apps to Visual FoxPro in 2013, Part 2 by Whil Hentzen
19 VFPX: FoxBarcodeQR by Rick Schummer


January 2013 - Number 30

01 Deep Dive: Call .NET Code from VFP the Easy Way by Doug Hennig
06 Know How: Try Thor's Terrific Tools, Part 2 by Tamar Granor, Ph D
13 Deep Dive: Another Boring Article About Regular Expressions X by Whil Hentzen
18 SQLite: The Business Case for Upgrading Apps to Visual FoxPro in 2013, Part 1 by Whil Hentzen


November 2012 - Number 29

01 Know How: Try Thor's Terrific Tools, Part 1 by Tamar Granor, Ph D
08 Deep Dive: Creating ActiveX Controls for VFP using .Net, Part 4 by Doug Hennig
12 VFPX: Intellisense X by Rick Schummer
19 SQLite: Vive La Difference: How SQLite varies from VFP SQL by Whil Hentzen


September 2012 - Number 28

01 Editorial: The Business Case for Upgrading Apps to Visual FoxPro in 2013 by Whil Hentzen
05 Know How: Using Assign methods by Tamar Granor, Ph D
09 Deep Dive: Creating ActiveX Controls for VFP using .Net, Part 3 by Doug Hennig
14 VFPX: Data Explorer 3 by Rick Schummer
20 New Ways: OS Based Invisible Data Compression in VFP by Pradip Acharya


July 2012 - Number 27

01 Editorial: Learn, Network, Be inspired by Rick Schummer
02 Know How: Put Access methods to work by Tamar Granor, Ph D
06 Deep Dive: Creating ActiveX Controls for VFP using .Net, Part 2 by Doug Hennig
11 SQLite: Inserting Large Amounts of Data into SQLite by Whil Hentzen
14 VFPX: VFP 9 SP2 Help File by Rick Schummer
18 Silverlight: Creating Dependency Properties and Understanding DP-concepts by Patrick Schärer
23 Tips & Tricks: Cool tool for reporting Problems by Tamar Granor, PhD


May 2012 - Number 26

01 Know How: Put Event Binding to Work, Part 2 by Tamar Granor, Ph D
07 Deep Dive: Creating ActiveX Controls for VFP using .Net, Part 1 by Doug Hennig
12 SQLite: SQLite Connection: Error Handling and Verification by Whil Hentzen
19 VFPX: Fox Barcode by Rick Schummer


March 2012 - Number 25

01 Know How: Put Event Binding to Work, Part 1 by Tamar Granor, Ph D
08 Deep Dive: Make Your Menus Pop by Doug Hennig
13 VFPX: Go Fish 4 by Rick Schummer
21 SQLite: Getting started with Client - Server with SQLite by Whil Hentzen


January 2012 - Number 24

01 New Ways: Managing Properties as Virtual Table Fields by Pradip Acharya
02 Deep Dive: The ctl32 Library, Part 3 by Doug Hennig
06 Know How: Speed Up Your SQL Code by Tamar Granor, Ph D
09 VFPX: Parallel Fox by Rick Schummer


November 2011 - Number 23

01 New Ways: Foxparse C Library for Handling Strings, Properties and Windows by Pradip Acharya


September 2011 - Number 22

01 Editorial: Totally Marshmallowed? Join us at SWFox DevCon for a refresher! by Rainer Becker
02 Deep Dive: The ctl32 Library, Part 2 by Doug Hennig
07 Know How: Make Your Queries Fly by Tamar Granor, Ph D
11 VFPX: Thor Adding Tools by Rick Schummer
17 Tips & Tricks: Schummer Tips and Tricks by Rick Schummer
19 Tips & Tricks: Report Writer by Cathy Knight
21 Internationalization: Internationalize Your App, Part 1: Entering international characters by Rainer Becker


July 2011 - Number 21

01 Editorial: It's show time again by Rainer Becker
02 Deep Dive: The ctl32 Library, Part 1 by Doug Hennig
07 Know How: Talking to Microsoft Office by Tamar Granor, Ph D
11 Customerizing: Customizing Your Vertical Market Application, Part IV by Cathy Pountney
17 Silverlight: Applications and the local System by Michael Niethammer
21 VFPX: Thor Introduction by Rick Schummer


May 2011 - Number 20

01 Deep Dive: Email and File Transfer the Fast (and Cheap!) Way by Doug Hennig
06 Know How: Build Your Own Project Tools by Tamar Granor, Ph D
13 Customerizing: Customizing Your Vertical Market Application, Part III by Cathy Pountney
17 Tools: dFPUG.fll Version 3 - Zip, Scan and more by Venelina Jordanova and Uwe Habermann and Erich Todt


March 2011 - Number 19

01 Deep Dive: Encryption the Fast (and Cheap!) Way by Doug Hennig
06 Know How: Inside the Object Inspector by Tamar Granor, Ph D
14 Customerizing: Customizing Your Vertical Market Application, Part II by Cathy Pountney
19 VFPX: Vista (and Windows 7) Dialogs via COMtool by Rick Schummer


January 2011 - Number 18

01 Deep Dive: Compression the Fast (and Cheap!) Way by Doug Hennig
06 Know How: Introducing the Object and Collection Inspector by Tamar Granor, Ph D
09 Customerizing: Customizing Your Vertical Market Application, Part I by Cathy Pountney
13 Silverlight: Lightswitch - a first look at the Beta of the new RAD tool by Michael Niethammer
20 Tools: Application String Handling Made Easy with Foxparse C library by Pradip Acharya


November 2010 - Number 17

01 Silverlight: Silverlight Business Applications by Venelina Jordanova and Uwe Habermann
10 Deep Dive: A More Flexible Report Designer by Doug Hennig
18 Know How: Understanding Business Objects, Part III by Tamar Granor, Ph D


September 2010 - Number 16

01 Editorial: Rescue in sight with Silverlight by Rainer Becker
02 VFPX: zProc IntelliSense by Rick Schummer
07 Deep Dive: Practical Uses for GDIPlusX, Part III by Doug Hennig
11 Know How: Understanding Business Objects, Part II by Tamar Granor, Ph D
20 Silverlight: SL Data-Binding and Data-Validation by Michael Niethammer


July 2010 - Number 15

01 Silverlight: Silverlight for VFP Developers by Venelina Jordanova and Uwe Habermann
09 VFPX: Code References by Rick Schummer
14 Deep Dive: Practical Uses for GDIPlusX, Part II by Doug Hennig
19 Know How: Understanding Business Objects, Part I by Tamar Granor, Ph D


May 2010 - Number 14

01 Editorial: The Visual FoxPro Roadshow 2010 by Rainer Becker
02 VFPX: OOP Menus by Rick Schummer
08 Deep Dive: Practical Uses for GDIPlusX, Part I by Doug Hennig
13 New Ways: Extending the Toolbox by Tamar Granor, Ph D
18 New Ways: Dating with DBI by Toni Feltman


April 2010 - Free special German issue about ADS
German edition, sponsored by Sybase

01 Advantage Database Server for Visual FoxPro Developers by Ken Levy


March 2010 - Number 13

01 Editorial: Visual FoxPro Stack Overflow by Ken Levy
02 VFPX: ProjectHookX by Rick Schummer
06 Deep Dive: Introduction to GDIPlusX, Part III by Doug Hennig
11 New Ways: OOP + Metadata = Flexibility by Tamar Granor, Ph D
15 New Ways: Paying it Forward by Toni Feltman


February 2010 - Free special issue about ADS
sponsored by Sybase

01 Advantage Database Server for Visual FoxPro Developers by Ken Levy


January 2010 - Number 12

01 Editorial: Get on the VFPX Bandwagon by Rick Schummer
02 VFPX: SCCText X by Rick Schummer
06 Deep Dive: Introduction to GDIPlusX, Part II by Doug Hennig
13 New Ways: Take advantage of SQL improvements by Tamar Granor, Ph D
17 New Ways: Where's the Beef? by Jim Booth (http://www.jamesbooth.com)
19 New Ways: String.Format for VFP by Eric Selje
22 VUProjectTools: Updating project files from the source control management by Uwe Habermann and Venelina Jordanova


December 2009 - Free special issue about VFP.NET

01 VFP.NET by Boudewijn Lutgerink


November 2009 - Number 11

01 Editorial: The history of VFP by Ken Levy (also available in French)
02 VFPX: Control Renamer by Rick Schummer
07 Deep Dive: Introduction to GDIPlusX, Part I by Doug Hennig
13 New Ways: Collections instead of Arrays by Tamar Granor, Ph D
17 Best Practices: Best Practices Part VI by Jim Booth (http://www.jamesbooth.com)
19 VUProjectTools: Beauty Studio by Uwe Habermann and Venelina Jordanova


September 2009 - Number 10

01 Editorial: New kids on the block by Rainer Becker
02 VFPX: Code Analyst by Rick Schummer
09 Deep Dive: Custom UI Controls: SFCombo Tree by Doug Hennig
14 New Ways: The Right Keys are Primary by Tamar Granor, Ph D
18 New Ways: Test Driven Development, After the Fact, Part II by Eric Selje
21 New Ways: ActiveLabel Class - CmdButton Substitute for Forms with the New Look by Pradip Acharya


July 2009 - Number 9

01 Editorial: All You Can Eat! by Rainer Becker
02 VFPX: Tabbing Navigation by Rick Schummer
06 Deep Dive: Custom UI Controls: Splitter by Doug Hennig
10 Best Practices: Best Practices Part V by Jim Booth (http://www.jamesbooth.com)
13 New Ways: Use the Toolbox! by Tamar Granor, Ph D
20 New Ways: Test Driven Development, After the Fact, Part I by Eric Selje


May 2009 - Number 8

01 Editorial: VFP 9 SP2 News by Rick Schummer
02 VFPX: PEM-Editor by Rick Schummer
10 Deep Dive: Creating Explorer Interfaces in Visual FoxPro, Part 3 by Doug Hennig
16 Best Practices: Best Practices Part IV by Jim Booth (http://www.jamesbooth.com)
20 New Ways: Handling Code that Changes at Runtime by Tamar Granor, Ph D
23 New Ways: Use FastNoData to drastically improve form load times by Mike Yearwood


March 2009 - Number 7

01 Editorial: Thanks for the Memories (and all the code)! by Doug Hennig
02 VFPX: Fox Tabs the VFP IDE by Rick Schummer
05 Deep Dive: Creating Explorer Interfaces in Visual FoxPro, Part 2 by Doug Hennig
11 Kit Box: So Long and Thanks for all the Fish! by Marcia Akins and Andy Kramek
16 New Ways: The Scope of Things by Tamar Granor, Ph D
19 Best Practices: Best Practices Part III by Jim Booth (http://www.jamesbooth.com)


January 2009 - Number 6

01 Editorial: VFPS: Visual FoxPro Stack by Ken Levy (also available in French and German)
02 VFPX: Using Desktop Alerts by Rick Schummer
06 Kit Box: Take it up with Management by Marcia Akins and Andy Kramek
10 New Ways: From Type to Type by Tamar Granor, Ph D
13 Best Practices: Best Practices Part II by Jim Booth (http://www.jamesbooth.com)
17 Extend Excel with VFP!: Using a Visual FoxPro ComServer with Excel (and other VBA applications) by Rainer Voemel


November 2008 - Number 5

01 Introduction by Rainer Becker
02 VFPX: Using the BalloonTip by Rick Schummer
08 Deep Dive: Creating Explorer Interfaces in Visual FoxPro, Part 1 by Doug Hennig
15 Kit Box: A Moving Experience by Marcia Akins and Andy Kramek
18 New Ways: Breaking Up is Not Hard to Do by Tamar Granor, Ph D
21 Best Practices: Best Practices Part I by Jim Booth (http://www.jamesbooth.com)


September 2008 - Number 4

01 Introduction by Rick Schummer
03 VFPX: Putting the OutlookNavBar to use by Rick Schummer
11 Deep Dive: Practical Uses for XML, Part 2 by Doug Hennig
17 Kit Box: A program is trying to automatically send e-mail by Marcia Akins and Andy Kramek
23 New Ways: Working with text by Tamar Granor, Ph D
27 New Ways: Past or Future Date Range in Reports by Pradip Acharya


July 2008 - Number 3

01 Introduction by Rainer Becker
02 VFPX: ctl32_StatusBar Easy to Implement by Rick Schummer
07 Deep Dive: Practical Uses for XML, Part 1 by Doug Hennig
14 New Ways: Working with work areas by Tamar Granor, Ph D
18 Kit Box: Doing a PROPER job by Marcia Akins and Andy Kramek
21 Vista: Displaying form borders in Windows Vista by Uwe Habermann
24 Events: The DevCon Germany 2007 from a visitor's perspective by Boudewijn Lutgerink


June 2008 - Free special issue about ADS
German version, sponsored by Sybase

01 Advantage Database Server for Visual FoxPro Developers by Doug Hennig


May 2008 - Number 2

01 Introduction by Rainer Becker
02 New Ways: Use the right loop for the job by Tamar Granor, Ph D
06 New Ways: Stroking the Keys by Jim Booth (http://www.jamesbooth.com)
08 Deep Dive: A Generic Import Utility, Part 2 by Doug Hennig
15 Kit Box: All a matter of form by Marcia Akins and Andy Kramek
21 VFPX: Property / Method Dialog Replacements by Rick Schummer


April 2008 - Free special issue about ADS
sponsored by Sybase

01 Advantage Database Server for Visual FoxPro Developers by Doug Hennig


March 2008 - Number 1

01 Introduction, see Fox RockX Introduction by Rainer Becker
02 VFPX: Open Source Extensions by Rick Schummer
09 Deep Dive: A Generic Import Utility, Part 1 by Doug Hennig
15 New Ways: Parsing and Building File and Path Names by Tamar Granor, Ph D
19 Kit Box: Managing Global Variables by Marcia Akins and Andy Kramek
23 Blog: Advantage Database Server V9.0, available soon by Andy Kramek

July 13, 2015 06:52 PM

Alex Feldstein

July 12, 2015

Alex Feldstein

July 11, 2015

Craig Bailey

Finally using an ad blocker

I’ve resisted using an ad blocker for years, since:

  1. I don’t mind ads
    Especially if they are personalised (as most ads are now). And I’m happy for ads to track me all over the web if it means I get a better ‘ad experience’
  2. I realise many sites rely on ads as their business model
    If it weren’t for them showing ads I wouldn’t get access to many of the useful resources I currently get for free.

But I can’t really resist any longer. The reason: performance

In part, triggered by this post on Daring Fireball, I installed AdBlock (in Chrome) just to see how much of a difference it made to performance.

In a word: heaps

I was kinda blown away by just how much faster everything is when ads are blocked. It’s significant.

The post Finally using an ad blocker appeared first on Craig Bailey.

by Craig Bailey at July 11, 2015 08:35 AM

Alex Feldstein

Rick Strahl's Web Log

Multiple Desktops in Windows

I spent the last month and a half using a Mac, running both OS X and Windows, and one thing I really appreciated was the multiple desktop support OS X offers. It was especially useful when running Parallels, which you can set up so that the Windows instance runs on a separate desktop, which is convenient.

I've since switched back to Windows and I have to plead ignorance: I didn't know that Windows has had support for multiple desktops for some time. Multiple desktop support actually harks back all the way to Windows XP, but the operating system never officially exposed this functionality. However, there are a number of utilities out there that you can use to take advantage of multiple desktops today - in a limited fashion.

Windows 10 – Official Multiple Desktop Support

Even better, Windows 10 will natively support multiple desktops. Windows 10 officially adds multiple desktops as part of a host of new desktop manager features that can be managed through the Windows UI as well as with convenient hotkeys. Hopefully they'll also add support for touch or mouse pad gestures so that you can swipe between desktops as you can on OS X, but currently I don't see support for that (touch pad vendors would have to provide the gesture mapping support, I suppose - then again, given how crappy most Windows machine touch pads are, maybe that's not such a good idea; my Dell XPS touch pad is the worst piece of crap I've ever used, amazing that manufacturers can't get such a simple device right).

Anyway, in Windows 10 you can use a number of shortcut keys to manipulate and manage multiple desktops:

Windows-Tab: Brings up the Task View, which includes a new Add Desktop option; this view also shows all of your open desktops along the bottom.

[Screenshot: the Task View with open desktops listed along the bottom]

Windows-Ctrl-Left/Right Arrow: Switches between desktops. You can use these key combos, or you can use Windows-Tab and then select the desktop of choice interactively, as shown in the screenshot above.

Moving Windows between desktops: You can also move windows between desktops by simply dragging them from the task view on the active desktop onto another desktop on the bottom of the task list.


How useful is this?

I tend to run 2 or 3 monitors (depending on whether I'm on Maui or here in the 'remote office' in Oregon) and then set up 3 desktops:

  • Main Desktop
    This is my main desktop where I do most of my work and get stuff done – mostly development work, business stuff, writing, browsing for research etc.
  • Supplementary Desktop: Media, Email, Twitter, Social Browsing etc.
    I like to set up a separate desktop to keep all the things that I leave open for a long time and get them off my main desktop to make the main desktop less cluttered. If I run music using a music player I really don't want to see Pandora or the Amazon Music player on my desktop. Same goes for email. Gmail or Outlook is always open but I don't want it in my way while I'm working on stuff. For one thing it's a little less distracting – notifications that pop up, pop up on the secondary desktop. Likewise with my Twitter client. Having all that 'distracting' stuff on a second screen keeps the distractions to a minimum. I have to explicitly check over there to get distracted on purpose :-)
  • Web Debug Desktop
    During development I prefer to have all my Web-related stuff running on a separate desktop. Typically this means running Chrome with a separate DevTools window, each taking up its own screen in a multi-monitor setup, which makes it very easy to see things happening. By having only the things I need running in this setup, it's much easier to see what's going on. Other things I run on this desktop are any test agents and other tools I use to issue requests, like WebSurge for URL testing of APIs. The nice thing is that development and the running application are separated only by the switch-desktop keys, and I get a much cleaner, clutter-free view to play with. It does take some getting used to pressing Windows-Ctrl-RightArrow instead of Alt-Tabbing to the browser and the dev tools, but that'll happen with time.

Multiple Desktops on older Versions of Windows

Multiple desktops have actually been supported in Windows since Windows XP, but there's never been any official UI built into Windows to create or access those desktops. However, there are third-party tools you can use to create and manage desktops. The most popular is:

Desktops from Sysinternals

In typical Sysinternals tradition, it's a small self-contained utility that provides the core features you need. Desktops is a small tray-icon application that allows you to manage up to 4 desktops.

When you click on the taskbar icon you get four squares, each of which represents a potential desktop to create:

[Screenshot: the Desktops tray pop-up showing four desktop squares]

You can then switch desktops by using the pop-up view above, or by using a set of hotkeys you can configure as part of the options. Desktops is pretty bare-bones: it doesn't support closing desktops and you can't move things around, but its simplicity and small size make it a good choice for desktop management.

There are a host of other tools that let you create virtual desktops, but most don't actually use this 'hidden' Windows feature; rather, they create their own separate desktops to display and manage. The nice thing about this simple but basic utility is that it's small and lightweight and works with what's in the Windows box.

Summary

I've only used the new desktop features in Windows 10 for a few days now, but I've already found them to be pretty damn useful for keeping clutter and distractions to a minimum, especially when coding. So if this is new to you in Windows, it might be worth checking out. I'm glad to see multiple desktops become an officially supported feature in Windows 10.

© Rick Strahl, West Wind Technologies, 2005-2015
Posted in Windows  

by Rick Strahl at July 11, 2015 02:02 AM

July 10, 2015

Alex Feldstein

July 09, 2015

FoxProWiki

EricRudder

Editor comments: Eric Rudder leaves MSFT

[NEW] And he's out! http://www.seattletimes.com/business/microsoft/microsoft-says-elop-rudder-to-leave-in-management-overhaul/

New job for Eric:
Microsoft shuffles enterprise group
CNET News.com / June 23, 2003, 2:41 PM PT
Microsoft announced on Monday a reorganization of its divisions that deal with servers and other back-end business systems, a move analysts characterized as a routine step to clarify the company's business strategy.
The changes occur within the Platforms Group, one of the main branches of Microsoft's business hierarchy. Three units within that group--Developer and Platform Evangelism, Windows Server System, and Enterprise Storage and Management--will be combined under the Servers and Tools division. Senior Vice President Eric Rudder, formerly head of the Developer and Platform Evangelism unit, will lead the new entity.
Rudder has been one of Microsoft's leading sermonizers for the company's .Net Web services strategy.
http://news.com.com/2100-1012_3-1020134.html

July 09, 2015 04:15 PM

Alex Feldstein

July 08, 2015

FoxProWiki

ValidateEmailAddress

Editor comments: Spelling
There are several levels of validation, and complete validation becomes very complicated. Because of mail gateways and private system conventions, the only way to completely validate an email address is to try to send to it and see if it bounces.

For reference, here are a couple of different, known to be valid, address forms:
A: mailbox@domain.tld
B: "Person's Name" < mailbox@domain.tld >
C: firstname.lastname@subdomain.domain.tld
D: mailbox+tag@domain.tld



However, here are the levels at which you can validate:
  1. Format: Make sure there is only one "@", that there is a "." to its right, that no two symbols ("@", ".", or "#"... any others?) are next to each other, and that there are no spaces (unless you want to support format "B", in which case you have to get fancier). See the sketch after this list.
     A valid email address does NOT require an "@". I'm running an internal SMTP server here and local mailboxes only require the account name for authentication and delivery, because it can't be anything other than the server itself. Demanding an "@" is one of the things that drives me crazy in the Outlook 2003 Junk filter. - Christof Wollenhaupt

     You are not right - according to mail-related RFCs, an email address DOES require the @ and the domain part, even in the case of same-domain transmission (it can however be abbreviated in some cases, but not totally removed). So it will be better for your admin to adhere to the standards. BTW, an email address and the "account name for authentication" (the one you use in mail clients) ARE different things - the second one doesn't require the @ sign. -- Igor Korolyov
  2. Server Existence: Extract the domain name (including the TLD) and use a lookup to find the MX record associated with that domain, then ping the MX server. Pinging the domain itself is not good enough, because the domain may not be associated with any host even though it has a good MX record and can therefore receive mail. I don't know offhand how to get the MX record, though.
  3. Mailbox Existence: Get the MX record, as above, then connect to the mail server, and use the VRFY command to verify that it will deliver to the provided mailbox.
  4. Delivery: Actually send the message, then wait to see if it gets bounced. Just because the VRFY command says the server CAN deliver to the mailbox doesn't mean that the server WILL deliver that particular message. Spam filters, etc, could cause the delivery to fail.
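
Levels 1 and 2 can be sketched in VFP along these lines. This is an illustration only, not a definitive implementation: the function names are made up, the format check covers forms "A", "C" and "D" but not "B", and the MX lookup shells out to nslookup and assumes its English output format.

FUNCTION IsEmailFormatOk    && level 1: format check
LPARAMETERS tcAddr
LOCAL lcAddr, lnAt
lcAddr = ALLTRIM(m.tcAddr)
lnAt   = AT("@", m.lcAddr)
DO CASE
CASE OCCURS("@", m.lcAddr) # 1                && exactly one "@"
    RETURN .F.
CASE " " $ m.lcAddr                           && no spaces (format "B" not handled)
    RETURN .F.
CASE NOT "." $ SUBSTR(m.lcAddr, m.lnAt + 1)   && need a "." to the right of the "@"
    RETURN .F.
CASE "@." $ m.lcAddr OR ".@" $ m.lcAddr OR ".." $ m.lcAddr   && no adjacent symbols
    RETURN .F.
CASE m.lnAt = 1 OR RIGHT(m.lcAddr, 1) $ ".@"  && nothing missing at the edges
    RETURN .F.
ENDCASE
RETURN .T.
ENDFUNC

FUNCTION GetMxHost          && level 2: find the MX server to ping
LPARAMETERS tcDomain
LOCAL loShell, lcOut, laLines[1], lnI
loShell = CREATEOBJECT("WScript.Shell")
* nslookup ships with Windows; -type=mx asks the DNS for the MX records
lcOut = loShell.Exec("nslookup -type=mx " + m.tcDomain).StdOut.ReadAll()
FOR lnI = 1 TO ALINES(laLines, m.lcOut)
    IF "mail exchanger" $ LOWER(laLines[m.lnI])
        * e.g. "domain.tld  MX preference = 10, mail exchanger = mx.domain.tld"
        RETURN ALLTRIM(SUBSTR(laLines[m.lnI], RAT("=", laLines[m.lnI]) + 1))
    ENDIF
ENDFOR
RETURN ""   && no MX record found
ENDFUNC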

July 08, 2015 12:05 PM

Alex Feldstein

July 07, 2015

FoxProWiki

UpcomingEvents

Editor comments: Philly July
A place to list upcoming Visual FoxPro events like conferences, meetings, user groups, open training sessions...
Closest at the top please, and please remove past events.

July 07, 2015 08:20 PM

PhiladelphiaVFPUserGroup

Editor comments: July: Tamar Granor on Optimization
Starting in August 2008, we meet the second Tuesday of the month.

A user group for Visual FoxPro developers in the greater Philadelphia area, including New Jersey, Delaware and the Lehigh Valley. We meet the second Tuesday of each month at 7 PM.

Beginning with the April 2006 meeting, there is a $5 charge per meeting.

Beginning with the July 2011 meeting, we will meet in room 158 at DeVry University, 1140 Virginia Drive, Fort Washington, PA. Beginning with the October, 2014 meeting, we're moving to room 104 at DeVry.

Feel free to bring something to eat and arrive as early as 6:30.

 Check out our blog at vfpphilly.blogspot.com.
 We're on Twitter: @VFUGPhilly

If you'd like to speak at our group or join our email list, send a message to tamar@tamargranor.com

July 07, 2015 08:18 PM

FoxCentral News

Philadelphia VFP User Group: July 14--Tamar E. Granor on Optimization

The next meeting of the Philadelphia Visual FoxPro User Group will be Tuesday, July 14 at 7:00 PM in room 104, DeVry University, 1140 Virginia Drive, Fort Washington, PA. Feel free to come as early as 6:30 and bring some dinner. Tamar E. Granor will do a Southwest Fox preview of her session "Can't this application go any faster?" Abstract: What do you do when your customer says that your application is too slow? How can you figure out what's slowing things down? How can you make it faster? Optimization of a VFP application is more than just applying Rushmore correctly, though that's an important step. In this session, we'll explore techniques for measuring performance of a VFP application and look at things you can do to speed it up.

by Philadelphia Visual FoxPro User Group at July 07, 2015 07:22 PM

VFP Philly

July 14: Tamar E. Granor: “Can’t this application go any faster?”

Our next meeting will be Tuesday, July 14. Tamar E. Granor will do a Southwest Fox preview of her session “Can’t this application go any faster?”

Abstract: What do you do when your customer says that your application is too slow? How can you figure out what's slowing things down? How can you make it faster?

Optimization of a VFP application is more than just applying Rushmore correctly, though that's an important step. In this session, we'll explore techniques for measuring performance of a VFP application and look at things you can do to speed it up.


by Tamar E. Granor (noreply@blogger.com) at July 07, 2015 07:16 PM

Alex Feldstein

A wonderful aircraft

Last weekend I flew again in a friend's wonderfully refurbished Great Lakes 2T-1A biplane (built in the 1970s).

The last time I flew in her was over two years ago, and she looked different then. She has just come out of a two-year rebuilding process: overhauled engine to new factory specs, new skin, and a new paint job.


This is what she looks like today, isn't she a beauty?




This is what she looked like back then:




We had a blast flying in South Florida.

by Alex Feldstein (noreply@blogger.com) at July 07, 2015 04:24 PM

Sandstorm's Blog (Home of ssClasses)

ExcelPivot (ssUltimate)

I got some extra time today, so I decided to move the ssExcelPivot class of ssClasses to the ssUltimate library, which houses my latest classes.  Hence this entry.

What is New?

Basically, it is ssExcelPivot with some changes as follows:

  • Changed the buttons to native VFP commandbuttons
  • Changed ssSwitch into SwitchX for better appearance
  • Changed optSwitch into OptionSwitchX for better appearance
  • Added a Miscellaneous section that allows the user to interactively change some properties; more options than ssExcelPivot
  • Added Show/Hide Grand Total capability
  • Added tooltips on all SwitchX and OptionSwitchX controls to make things more informative
  • Rearranged objects to make the popup form cleaner


What is this Class?

Briefly, it is designed to allow a developer to create any valid pivot report of their choosing, from either a table or a cursor, straight into Excel, even without any knowledge of Excel automation.  As long as you know how to create a cursor, you are done.

This class allows the developer to set a fixed report, as can be seen here.  Those are the default arrangements and fields I set for that specific report via code.  However, any user can still change the pivot report interactively by dragging and dropping fields onto their respective sections (columns, values, rows and filters) before sending those over to Excel.

What they cannot do interactively here is word-wrapping, which I retain purely via code for memo fields converted into character type (Excel does not accept a memo field).  Word wrapping is done by suffixing the field with a colon and the width, e.g., Remarks:30, meaning field Remarks will be 30 wide inside an Excel cell, and if the content is longer, it will be word-wrapped.
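
For example, the memo-to-character conversion might be handled while building the source cursor (a sketch only; the assets table and its fields are hypothetical):

* Excel does not accept memo fields, so cut remarks down to character first
SELECT assetno, assetdesc, PADR(ALLTRIM(remarks), 240) AS remarks ;
      FROM assets ;
      INTO CURSOR junkpivot NOFILTER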

Anyway, just check the old ssExcelPivot posts if you are curious about what else this class can do.




by Jun Tangunan (noreply@blogger.com) at July 07, 2015 05:32 AM

Alex Feldstein

July 06, 2015

FoxCentral News

Southwest Fox/Xbase++ 2015: Definite Go!

We're delighted to tell you that we met and made the decision to move forward with this year's conferences. Registrations to date are almost identical to last year's on the same date, giving us confidence that the conferences remain financially feasible. We're also happy to tell you that we've added Rick Borup to the speaker list. Thanks to those who have registered so far. We really appreciate it. That said, we still need your help to pull this off. Gather, share, learn, expand your knowledge in Gilbert.

by Southwest Fox Conference at July 06, 2015 11:39 PM

Shedding Some Light

Southwest Fox/Xbase++ 2015: Definite Go!

We’re delighted to tell you that we met (via Skype) and made the decision to move forward with this year’s conferences. Registrations to date are almost identical to last year’s on the same date, giving us confidence that the conferences remain financially feasible.

We’re also happy to tell you that we’ve added Rick Borup to the speaker list (see link for his bio and sessions).

Thanks to those who have registered so far. We really appreciate it. That said, we still need your help to pull this off. Gather, share, learn, expand your knowledge in Gilbert.

Registrations still available at http://geekgatherings.com/registration. Please do not hesitate.

by Rick Schummer at July 06, 2015 11:23 PM

Doug Hennig

Southwest Fox/Southwest Xbase++ 2015 Are a Go!

I’m delighted to tell you that Rick, Tamar, and I met (via Skype) and made the decision to move forward with this year's conferences. Registrations to date are almost identical to last year's on the same date, giving us confidence that the conferences remain financially feasible.

We're also happy to tell you that we've added Rick Borup to the speaker list. I’m especially looking forward to his Version Control Faceoff: Git vs Mercurial session, because while I use Mercurial daily, I’m not that familiar with Git.

See you in October!

by Doug Hennig (noreply@blogger.com) at July 06, 2015 07:47 PM

Alex Feldstein

July 05, 2015

Rick Strahl's Web Log

Windows 10 Upgrade and IIS 503 Errors

After upgrading my machine to Windows 10 today I found that IIS, while working was throwing 503 Service Unavailable errors on every page. Turns out the issue is the Rewrite Module wasn't updated in the upgrade and that's causing a hard crash of the IIS module. Here's how to fix this issue.

by Rick Strahl at July 05, 2015 09:18 PM