Test Driven Development with Apex Triggers on Force.com

In an earlier blog, we examined a simple example of Test Driven Development (TDD). Here, we dive into a real-life example of using TDD to develop production Apex code for Salesforce CRM.

How does TDD work in practice?


Let’s walk through a non-trivial example of using test-driven development to craft a complex Apex trigger.

Problem: An off-the-shelf integration requires the existence of a specific Salesforce CRM Opportunity object. When the object is present, the integration acts on the Account related to the Opportunity. When the object is not present, the integration bypasses the Account.

User Story: As a CRM User, I need to easily manage the Opportunity that signals the integration, for example, by selecting a checkbox that creates or removes the related integration object.

Solution: Provide an Account trigger that observes the checkbox and inserts or deletes the related integration object.

How do we test an Apex trigger?

In the case of an Apex trigger, we can’t invoke the behavior directly. The purpose of a trigger is to create a side effect when we insert, update, or delete objects. Basically, we do something like

| Account a = new Account(name = 'test');
| insert(a);

and observe the state changes.

(Note: Another approach is to have the trigger call a worker class, and then write tests against the worker class. Here, we are using the direct approach, and keeping the trigger code in the trigger.)
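If we did go the worker-class route, a minimal sketch might look like the following. The NU_a2zOppHandler name and method signature are hypothetical, and the trigger and class would live in separate files.

// Hypothetical worker-class (handler) variation

trigger NU_a2zCreateOpportunity on Account (after insert, after update) {
    // The trigger only delegates; unit tests can target the handler class directly.
    NU_a2zOppHandler.handle(Trigger.new, Trigger.isInsert, Trigger.isUpdate);
}

public class NU_a2zOppHandler {
    public static void handle(List<Account> accounts, Boolean isInsert, Boolean isUpdate) {
        // Insert or delete the related a2z Opportunities here.
    }
}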

As we saw in Part 1, a good place to start any coding exercise is to clearly define the expected state change. We can then code tests against the expected state changes, call the relevant DML operations (insert, update, and/or delete), and observe the outcome. (DML = Data Manipulation Language.)

Let’s express our expectations (or requirements) in the form of a documentation comment for the trigger under test.

// Integration Account trigger requirements

/**
Manages a2z Opportunity by referring to the “Do Import” checkbox on the Account object (NU_isA2zImport__c). When an a2z Opportunity exists, the Account is imported to a2z.
The trigger handles three transitional states: (1) if insert and doImport, insert opportunity;
(2) if update and !doImport and hasOpp, delete opportunity; (3) if update and doImport and !hasOpp, insert opportunity.
By inference, the trigger also handles: (4) if insert and !doImport, exit;
(5) if update and doImport and hasOpp, exit; (6) if !doImport and !hasOpp, exit.
*/

To start coding the trigger that implements this logic, according to test driven development, we need a failing test. Let’s start with (1) and write a test for the Insert case. Since this is a non-trivial example, there is some scaffolding for the test.

// Test for Insert case

final static Id A2Z_RECORD_TYPE_ID = Schema.SObjectType.Opportunity.
    getRecordTypeInfosByName().get('a2z').getRecordTypeId();
final static String A2Z_STAGE_NAME = 'Closed Won';

/**
Exercises "Insert Import" (1) by inserting a new Account with the checkbox set,
and observing whether a corresponding Opportunity is created.
*/
static testMethod void testInsImp() {
    // Bootstrap a test account, passing in values for needed fields.
    Account a = new Account(name = 'test', NU_isA2zImport__c = true);
    // Insert and Select the test Account
    insert(a);
    // Do we have an a2z opp?
    Integer opps = [
        SELECT COUNT()
        FROM Opportunity
        WHERE AccountId = :a.id
        AND RecordTypeId = :A2Z_RECORD_TYPE_ID
        AND StageName = :A2Z_STAGE_NAME
    ];
    Boolean hasOpp = (opps > 0);
    System.assert(hasOpp, 'Expected opp on insert.');
}


When we run this test, it fails, because no one has written a trigger to insert the opportunity when the checkbox is true. Next!

// Code for the Insert case

/** Manages a2z Opportunity by referring to the "Do Import" checkbox
on the Account object.
*/
trigger NU_a2zCreateOpportunity on Account (after insert, after update) {

    final Id A2Z_RECORD_TYPE_ID = Schema.SObjectType.Opportunity.
        getRecordTypeInfosByName().get('a2z').getRecordTypeId();
    final String A2Z_STAGE_NAME = 'Closed Won';

    // (1) On insert, if isImport, insert opp
    if (Trigger.isInsert) {
        List<Opportunity> delta = new List<Opportunity>();
        for (Account a : Trigger.new) {
            if (a.NU_isA2zImport__c) {
                Opportunity o = new Opportunity(
                    AccountId = a.Id,
                    Name = 'a2z',
                    RecordTypeId = A2Z_RECORD_TYPE_ID,
                    StageName = A2Z_STAGE_NAME,
                    CloseDate = Date.today()
                );
                delta.add(o);
            }
        }
        if (delta.size() > 0) insert(delta);
    }
}

When we run the test again, it succeeds, because we now have the trigger code that inserts the a2z Opportunity.

(Note: If you haven’t written Apex triggers, the for-loop might seem odd. As a performance tweak, Apex triggers work with batches. Most times, it’s a batch of one. But, if data is being imported, a batch could contain 200, or even 2000, objects to insert, update, or delete.)
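To see why the trigger collects new Opportunities into a list and issues a single insert, consider a hypothetical non-bulkified version. With DML inside the loop, a 200-record import would issue 200 insert statements and exceed the governor limit of 150 DML statements per transaction.

// Hypothetical anti-pattern: DML inside the loop

for (Account a : Trigger.new) {
    if (a.NU_isA2zImport__c) {
        // One insert per Account: 200 Accounts means 200 DML statements,
        // which blows past the 150-DML-per-transaction governor limit.
        insert new Opportunity(
            AccountId = a.Id,
            Name = 'a2z',
            RecordTypeId = A2Z_RECORD_TYPE_ID,
            StageName = A2Z_STAGE_NAME,
            CloseDate = Date.today()
        );
    }
}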

Since unit testing is baked into Apex, we don’t need to undo any database operations that occur with an actual testMethod. While the testMethod is running, the trigger can insert all the opportunities it likes, but when the test ends, Force.com rolls it all back for us. No fuss. No muss. No left-over cruft.

Looking back at the code, I see things I don’t like. There are redundant constants, and we are also doing a lot of heavy lifting inline, obscuring the flow of the code. Since we have a passing test, let’s improve the design, and run the test again.

First, since we will need to share constants and helpers between the tests and the code-under-test, let’s extract existing code into a static helper class.

// Extracting a utility class to share test and domain code.

/** Encapsulates tools used by a2z Opportunity test and domain code.
*/
public class NU_a2zOpps {

    /**
    Defines an a2z Opp with a particular Record Type ID and Stage Name.
    (TODO: Transfer to custom settings and expose to validation rule.) */
    final static public String A2Z_STAGE_NAME = 'Closed Won';
    final static public Id A2Z_RECORD_TYPE_ID = Schema.SObjectType.Opportunity.
        getRecordTypeInfosByName().get('a2z').getRecordTypeId();

    /**
    Declares a default name for generated opportunities. */
    static public String A2Z_NAME = 'a2z Import';

    /**
    Returns a ready-to-use a2z Opportunity. */
    static public Opportunity newOpp(Account a) {
        return new Opportunity(
            AccountId = a.Id,
            RecordTypeId = A2Z_RECORD_TYPE_ID,
            Name = A2Z_NAME,
            StageName = A2Z_STAGE_NAME,
            CloseDate = Date.today()
        );
    }

    /**
    Determines if a given Account has an a2z Opportunity. */
    static public Boolean hasOpp(Account a) {
        Integer opps = [
            SELECT COUNT()
            FROM Opportunity
            WHERE AccountId = :a.id
            AND RecordTypeId = :A2Z_RECORD_TYPE_ID
            AND StageName = :A2Z_STAGE_NAME
        ];
        return (opps > 0);
    }
}

Replacing the extracted code with references to the static class, our test and trigger are now easier to follow.

// Test and domain classes refactored to use new utility class.

@isTest
private class NU_TEST_a2zCreateOpportunity {

    /**
    Generates an account with the NU_isA2zImport__c raised or lowered. Assumes default is false.
    */
    static Account newAccount(Boolean doImport) {
        Account a = new Account(Name = 'Test Account',
            CompanyNumber__c = '1');
        if (doImport) a.NU_isA2zImport__c = true; // False is default
        return a;
    }

    /**
    Inserts the given account, and returns it again from a select.
    */
    static Account doInsert(Account a) {
        insert(a);
        return [
            SELECT id, NU_isA2zImport__c, CompanyNumber__c
            FROM Account
            WHERE id = :a.id
        ];
    }

    /**
    Exercises "Insert Import".
    */
    static testMethod void testInsImp() {
        Account a = newAccount(true);
        Account a2 = doInsert(a);
        System.assert(NU_a2zOpps.hasOpp(a2), 'Expected opp on insert.');
    }
}

trigger NU_a2zCreateOpportunity on Account (after insert, after update) {
    // (1) On insert, if isImport, insert opp
    if (Trigger.isInsert) {
        List<Opportunity> delta = new List<Opportunity>();
        for (Account a : Trigger.new) {
            if (a.NU_isA2zImport__c) {
                delta.add(NU_a2zOpps.newOpp(a));
            }
        }
        if (delta.size() > 0) insert delta;
    }
}

We implemented the first requirement using a classic TDD pattern:

  • Create a failing test that proves a desired behavior is not present.
  • Write just enough code to pass the test.
  • Once the test succeeds, improve the design (refactor) so that it’s easy to maintain.

Let’s continue to follow the TDD pattern with our second requirement: “if update and !doImport and hasOpp, delete opportunity”.
First, the failing test:

// Test deleting a related opportunity

static testMethod void testUpdImpNoImp() {
    Account a = newAccount(true); // Import
    Account a2 = doInsert(a); // Line 2
    a2.NU_isA2zImport__c = false; // No Import
    update a2; // Line 4
    System.assert(!NU_a2zOpps.hasOpp(a2), 'Expected no opps on update.');
}


When we run the test, it fails, because our trigger inserts a new a2z Opportunity (at line 2) but does not delete the Opportunity (at line 4).

OK, let’s update the trigger to provide the behavior expected by line 4. As before, we need to write the trigger to loop through a batch, while minimizing database calls to stay within governor limits.

// (2) On update, if not isImport and haveOpp, delete opp

if (Trigger.isUpdate) {
    Map<Id, Opportunity> opps = NU_a2zOpps.getOpps(Trigger.new);
    Set<Id> ids = opps.keySet();
    List<Opportunity> omega = new List<Opportunity>();
    for (Account a : Trigger.new) {
        if (!a.NU_isA2zImport__c && ids.contains(a.Id)) {
            omega.add(opps.get(a.Id)); // (2)
        }
    }
    if (omega.size() > 0) delete omega;
}


The crux of the code change is determining if we have a related opportunity to delete. Easy enough with one Account, but in the case of a trigger, we might have to check 200 accounts, and our query limit is only 100. Since the trigger passes us the set of Accounts in the batch, it’s not difficult to retrieve the set of Opportunities related to those Accounts. Though, now that we have a utility class, we should keep the query details encapsulated behind another helper method.

The getOpps helper method returns a Map<Id, Opportunity> itemizing the accounts in our batch that have related a2z opportunities.

// The getOpps helper method

static public Map<Id, Opportunity> getOpps(List<Account> accounts) {
    List<Opportunity> oppsList = new List<Opportunity>();
    Map<Id, Opportunity> opps = new Map<Id, Opportunity>();
    Integer count = [
        SELECT COUNT() FROM Opportunity
        WHERE RecordTypeId = :A2Z_RECORD_TYPE_ID
        AND StageName = :A2Z_STAGE_NAME
        AND AccountId IN :accounts
    ];
    if (count > 0) {
        oppsList = [
            SELECT AccountId FROM Opportunity
            WHERE RecordTypeId = :A2Z_RECORD_TYPE_ID
            AND StageName = :A2Z_STAGE_NAME
            AND AccountId IN :accounts
        ];
        for (Opportunity o : oppsList) {
            opps.put(o.AccountId, o);
        }
    }
    return opps;
}


A critical clause in the helper’s SELECT statement is “AccountId IN :accounts“. This clause ensures that we only retrieve the Opportunities that are related to Accounts in the current batch. Without this clause, we could retrieve more Opportunities than allowed by the Force.com governor (50,000). The helper also makes a point of returning an empty Map if there are no matching opportunities, simplifying life for the caller.

While we’ve been coding the trigger to act on a batch, our tests have not been passing a batch of objects to the trigger. Let’s add a test to be sure batch mode is working.

// Test to verify that insert works with batches of records

/**
Exercise Insert Import in batch mode.
*/
static testMethod void verifyBatchInsert() {
    List<Account> rows = new List<Account>();
    for (Integer r = 0; r < 200; r++) {
        rows.add(newAccount(true, r));
    }
    insert(rows);
    List<Account> inserted = [
        SELECT id, NU_isA2zImport__c, CompanyNumber__c
        FROM Account
        WHERE id in :rows
    ];
    Set<Id> ids = NU_a2zOpps.getOpps(inserted).keySet();
    Boolean success = true;
    for (Account a : inserted) {
        success = success && ids.contains(a.Id);
    }
    System.assert(success, 'Expected opps on batch insert.');
}


When we first run the test, we hit a problem with a helper method. This particular organization includes an external ID that must be unique for each record. In batch mode, our external IDs are not unique, and so we hit a validation error. A quick fix is to pass in the counter from the loop, creating a serial number for each Account.

// newAccount with offset parameter

static Account newAccount(Boolean doImport, Integer offset) {
    Account a = new Account(
        Name = 'Test Account',
        CompanyNumber__c = String.valueOf(offset)
    );
    if (doImport) a.NU_isA2zImport__c = true; // False is default
    return a;
}

// for backward-compatibility
static Account newAccount(Boolean doImport) {
    return newAccount(doImport, 0);
}


And, a batch delete test.

// batchDelete

static testMethod void batchDelete() {
    List<Account> rows = new List<Account>();
    for (Integer r = 0; r < 200; r++) {
        rows.add(newAccount(true, r));
    }
    insert rows;
    List<Account> inserted = [
        SELECT id, NU_isA2zImport__c, CompanyNumber__c
        FROM Account
        WHERE id in :rows
    ];
    for (Account a : inserted) {
        a.NU_isA2zImport__c = false;
    }
    update inserted;
    Set<Id> ids = NU_a2zOpps.getOpps(inserted).keySet();
    Boolean success = true;
    for (Account a : inserted) {
        success = success && !ids.contains(a.Id);
    }
    System.assert(success, 'Expected no opps on batch delete.');
}


Both of the new tests are passing, but they seem to repeat a lot of code, and we have a third requirement coming up that will also need batch mode testing. Let’s see if we can create a helper method that can serve both tests.

// batchHelper method

static void batchHelper(Boolean insImp) {
    List<Account> rows = new List<Account>();
    for (Integer r = 0; r < 200; r++) {
        rows.add(newAccount(true, r)); // Always set the flag, so the delete case has opps to remove
    }
    insert rows;
    List<Account> inserted = [
        SELECT id, NU_isA2zImport__c, CompanyNumber__c
        FROM Account
        WHERE id in :rows
    ];
    // For delete, we need to update the flag
    if (!insImp) {
        for (Account a : inserted) {
            a.NU_isA2zImport__c = false;
        }
        update inserted;
    }
    Set<Id> ids = NU_a2zOpps.getOpps(inserted).keySet();
    Boolean success = true;
    if (insImp) {
        for (Account a : inserted) {
            success = success && ids.contains(a.Id);
        }
        System.assert(success, 'Expected opps on batch insert.');
    } else {
        for (Account a : inserted) {
            success = success && !ids.contains(a.Id);
        }
        System.assert(success, 'Expected no opps on batch delete.');
    }
}

/**
Exercises Batch insert.
*/
static testMethod void batchHelperIns() {
    batchHelper(true);
}

/**
Exercises Batch delete.
*/
static testMethod void batchHelperDel() {
    batchHelper(false);
}


By passing a flag, we are able to use one utility for both cases, and share 90% of the code.

Repeating the patterns we’ve seen, we can test and refactor our way into a robust, reliable trigger to manage our integration object.

For the complete production source code (without the play-by-play), see Managing a Related Object via a Checkbox in Salesforce CRM.

Key takeaways are:

  • Use a utility class to share code between test and domain classes.
  • Use helper methods to share code between similar test methods.
  • Use the SELECT-IN-SET pattern to keep triggers within governor limits.

Test Driven Development is a rigorous, structured approach that helps us create robust and reliable code. Since TDD uses successive refinement, we can easily extend and improve the code over time.

Test Driven Development with Apex on Force.com


As far as I know, Force.com – the software development platform for Salesforce CRM – is the only platform that requires unit test coverage for production code. Before an Apex developer can deploy custom code to a production environment, the overall unit test coverage for the environment must be 75% or better.

What is Unit Test Coverage?

Let’s look at a simplistic, contrived example of unit test coverage. For demonstration purposes, the HelloWorld Class shows an Apex function that tests whether the text of a string matches “Hello World” or not.
// HelloWorld Class

public class HelloWorld {
    public static String isHelloWorld(String myString) {
        if (myString.equals('Hello World')) return 'Yes it is!'; // Line 1
        else return 'No it is not!'; // Line 2
    }
}

The verifyIsHelloWorld test method exercises our gratuitous example.

// verifyIsHelloWorld test method

@isTest
private class TEST_HelloWorld {
    static testMethod void verifyIsHelloWorld() {
        String outcome = HelloWorld.isHelloWorld('Hello World');
        System.assert(outcome == 'Yes it is!', 'Expected positive message.');
    }
}

At this point, we have 66% test coverage, because our test exercises only one of the two statements in isHelloWorld (line 1).

To bring test coverage up to 100%, we need to add another test method.

// Another test method

static testMethod void verifyIsHelloWorldFalse() {
    String outcome = HelloWorld.isHelloWorld('Hello Kitty');
    System.assert(outcome == 'No it is not!', 'Expected negative message.');
}


With both of these test methods in play, our code now has 100% coverage.

Why is test coverage important?

Salesforce.com has high standards for its own code and expects custom Apex code to also be robust and error-free. One of the best ways to increase code quality is to encourage developers to write unit tests. A 2005 study found that unit tests increased both coder productivity and code quality.

While it’s possible for developers to boost code coverage with pointless tests, hardcore coders see unit tests as a way to release better code sooner – the keyword being “release”. The time spent on proactive unit testing is a trade-off with the time spent on reactive debugging. We can find our own bugs ourselves with unit tests, or wait and fix them later when a feature comes back with a QA ticket attached.

In my own work, I’ve found that the best way to ensure that code has adequate test coverage is to practice test-driven development (TDD).

What is Test Driven Development (TDD)?

For the uninitiated, a classic way to bootstrap unit testing (and TDD) is to start with defect reports. Before fixing a bug, a developer first writes a test that proves that the defect exists. For example, if someone reports that isHelloWorld fails if we pass in a null string, we could start with a test like the one shown by verifyIsHelloWorldNull.

// verifyIsHelloWorldNull

static testMethod void verifyIsHelloWorldNull() {
    String outcome = HelloWorld.isHelloWorld(null);
    System.assert(outcome == 'No it is not!', 'Expected negative message.');
}


If we run this test, it raises an exception “System.NullPointerException: Attempt to de-reference a null object”.

Since an exception counts as a failing test, we can proceed with the fix, say, by changing the code from:

| if (myString.equals('Hello World')) return 'Yes it is!'; // Line 1

to:

| if ('Hello World'.equals(myString)) return 'Yes it is!'; // Line 1

and maybe, for good measure, include a fourth test case for an empty string.

// Test for empty String

final static String EMPTY = '';
static testMethod void verifyIsHelloWorldEmpty() {
    String outcome = HelloWorld.isHelloWorld(EMPTY);
    System.assert(outcome == 'No it is not!', 'Expected negative message.');
}


Once all of our tests are passing, we could even refactor the code, and improve the internal design by using a constant and a single comparison.

// Code refactored

public class HelloWorld {
    public final static String HELLO_WORLD = 'Hello World';
    public final static String YES_WORLD = 'Yes it is!';
    public final static String NO_WORLD = 'No it is not!';
    public static String isHelloWorld(String myString) {
        return (HELLO_WORLD.equals(myString)) ? YES_WORLD : NO_WORLD;
    }
}


If our tests pass (they do), we can be confident that our refactoring did not break the code’s external behavior. Passing tests give us the courage to refine existing code and improve the internal design.

The key idea behind TDD is to “never write a line of code without a failing test”. If we are going to write the test anyway, better to write it first, code to the test, and receive full benefit for the time we invest.

How do we test code that doesn’t exist?

In an Apex environment, a unit test usually operates at the class level. To bootstrap testing a class or method that does not exist, we can start coding the test, create a stub class with stub methods, sufficient to compile the test, confirm that it fails, and then fill-in functionality to pass the test.

Let’s start over from scratch. First, we should define our requirements for the isHelloWorld method.

// isHelloWorld requirements

/**
The isHelloWorld method determines if a String equals 'Hello World'.
(1) Given the String 'Hello World', the method returns 'Yes it is!'.
(2) Given some other String, the method returns 'No it is not!'.
(3) Given a null or empty string, the method returns 'No it is not!'.
*/

Then, we can write a “happy path” test for the first requirement.

// A happy path test for requirement (1)

final static String POSITIVE = 'Expected positive message.';
static testMethod void verifyIsHelloWorld() {
    String outcome = HelloWorld.isHelloWorld('Hello World');
    System.assert(outcome == HelloWorld.YES_WORLD, POSITIVE);
}

provide a stub Hello World class to compile the test



// A HelloWorld stub class

public class HelloWorld {
    public static String isHelloWorld(String myString) {
        return null;
    }
}

add just enough behavior to pass one test for one requirement


// Coding requirement (1)

public final static String HELLO_WORLD = 'Hello World';
public final static String YES_WORLD = 'Yes it is!';

public static String isHelloWorld(String myString) {
    return HELLO_WORLD.equals(myString) ? YES_WORLD : null;
}

then another requirement


// Testing requirement (2)

final static String NEGATIVE = 'Expected negative message.';
static testMethod void verifyIsHelloWorldFalse() {
    String outcome = HelloWorld.isHelloWorld('Hello Kitty');
    System.assert(outcome == HelloWorld.NO_WORLD, NEGATIVE);
}

// Coding requirements (1) and (2)

public final static String HELLO_WORLD = 'Hello World';
public final static String YES_WORLD = 'Yes it is!';
public final static String NO_WORLD = 'No it is not!';

public static String isHelloWorld(String myString) {
    return HELLO_WORLD.equals(myString) ? YES_WORLD : NO_WORLD;
}

and a third


// Testing requirement (3)

static testMethod void verifyIsHelloWorldNull() {
    String outcome = HelloWorld.isHelloWorld(null);
    System.assert(outcome == HelloWorld.NO_WORLD, NEGATIVE);
}

final static String EMPTY = '';
static testMethod void verifyIsHelloWorldEmpty() {
    String outcome = HelloWorld.isHelloWorld(EMPTY);
    System.assert(outcome == HelloWorld.NO_WORLD, NEGATIVE);
}

For requirement 3, we added two test methods, but did not need to change any code, since the current implementation passed the tests.

To fully test the method, we might also add a test for a string of maximum length, so that we test both boundaries. But, as it stands, we have 100% test coverage, and a test for each stated requirement, which meets my own personal “definition of done”.
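A hedged sketch of such a boundary test might look like the following; the 100,000-character length and the use of String.repeat are illustrative assumptions, not part of the original exercise.

// A possible maximum-length boundary test (illustrative sketch)

static testMethod void verifyIsHelloWorldMaxLength() {
    // Assumes String.repeat is available; builds a long, non-matching input.
    String longInput = 'x'.repeat(100000);
    String outcome = HelloWorld.isHelloWorld(longInput);
    System.assert(outcome == HelloWorld.NO_WORLD, NEGATIVE);
}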

Three takeaways from this exercise are:

  • Never write a line of code without a failing test.
  • Test every requirement, one requirement at a time.
  • Passing tests give us the courage to refactor.

In a followup blog, TDD with Apex Triggers, we look at how Test Driven Development works in practice, with a real-life non-trivial example.

Keeping Salesforce Implementations on Track with Milestones PM and ChangeIT

A secret to the success of Salesforce CRM is how easy it is to customize and extend. Out of the box, Salesforce provides new users with a world-class framework for enabling collaboration between staff members and customers, or between staff and staff, or even between customers and customers (if you dare!). If that wasn’t enough, Salesforce provides a variety of ways to tailor your instance of the framework, so that it fits your own processes like a glove.

When folks first implement Salesforce, it’s easy to get carried away. Salesforce CRM can do so much, it’s tempting (but not practical) to try and do everything at once. In fact, Salesforce.com recommends that people spread out customization plans, so that refinement becomes an ongoing process.

Happily, two of the many Salesforce extensions are a project tracking app called “Milestones PM“ and a change tracking app called “ChangeIT“. You can use Milestones PM to organize implementation projects, and ChangeIT to gather and prioritize new change requests. Both can be installed into your environment from the Salesforce AppExchange, free of additional charge.

Milestones PM

Milestones PM is an elegant approach to project tracking that makes it easy to capture and follow a classic work breakdown structure. It’s a great fit for IT projects, but the app would work just as well for any type of task-based project, such as organizing a retreat, publishing a newsletter, or planning an office move.

The application uses six objects to track progress: Project, Log, Milestone, Task, Time, and Expense, as shown in the illustration.

Project is the top-level container for the other five objects. Aside from tracking key details – like a Kick-Off Date, Deadline, and Description – a Project contains a set of Milestones, along with optional Log records.

Logs are generic memo records that you can attach to a Project, Task, Time, or Expense record. Logs can be used to capture any miscellaneous detail that doesn’t fit neatly into the other fields.

Milestones are the key organizing object within a project. To be useful, a Project must contain at least one Milestone record, which in turn can contain Task, Time, or Expense records. Milestones can have their own Kickoff and Deadline Dates, and be linked to Parent, Predecessor, or Successor Milestones. While not as featureful as Microsoft Project dependencies, good use of the Parent, Predecessor, and Successor Milestone groupings can make complex projects easier to understand and navigate.

The backbone of any project is its Tasks. Every Task must have a Name and be assigned to a Milestone, and can also track an Assignee, Start and Due Dates, statuses like Priority (0-4), Stage (In progress, Resolved, Closed) and Class (Ad Hoc, Defect, Rework), Estimates, and other properties.

Time and Expense records can be attached to Tasks, which are tallied as part of the Project’s overall metrics.

As shown in the screen shot, Milestones PM makes excellent use of native Salesforce metrics, and integrates with other native features like Chatter and Calendar. It also supports batch operations based on views and comes bundled with two dozen ready-to-run Reports.

Even better, Milestones PM is an Aloha App, so it doesn’t count against the number of tabs or objects your Salesforce instance consumes.

ChangeIT

Like housework, a good Salesforce implementation is never done. The platform is so flexible and so deep, there will always be ways to make it work even better for your users. The ChangeIT application provides a vehicle for tracking new features and fixes (which you could then turn into Milestones PM tasks).

In a nutshell, ChangeIT provides a simple way to manage changes to your Salesforce instance, and to notify team members when changes are scheduled or implemented. It also helps coordinate change requests, so developers and administrators are not stepping on each other.

The application provides a single, simple tab with a form for making change requests, as shown in the illustration.

Saving the initial request triggers an approval workflow to the individuals that you set up. Once the initial request is approved or denied, the request can be worked on by developers or administrators (and/or transferred to Milestones PM). The application also includes a dashboard and supporting reports to view the pipeline of change requests.

Since the applications are independent, you can use either or both – your instance, your choice!

We’re always exploring better approaches to project tracking, with applications like Basecamp, Tom’s Planner, JIRA, and OnTime. Do you have a favorite tool? What features do you love? What features do you miss?

Ted @ ApacheCon


Ted Husted of NimbleUser will be speaking at ApacheCon in Vancouver, BC, Canada on November 10 and 11 on “The Secret Life of Open Source” and “.NET @ Apache.org”.

The Secret Life of Open Source

Apache, GNU, Mozilla, Ubuntu, PHP, LibreOffice, Wikipedia – Today, there are hundreds of open source groups, each with its own culture, methodology, and governance model.

  • How are these groups alike?
  • How are they different?
  • Is there one true path to open source enlightenment, or do many paths converge around a common singularity?

.NET at Apache.org

Like it or not, many open source developers are moving to the Microsoft .NET platform, and we’re bringing our favorite tools with us! This session looks inside ASF projects that are creating software for .NET and Mono – like ActiveMQ, Chemistry, Logging, Lucene, QPid, and Thrift – and shows how to create leading-edge ASP.NET applications with ASF open source libraries. We’ll also look at integrating other .NET open source projects, like Spring.NET, NVelocity, and JayRock, into your C# application to create a complete open source .NET stack.

Before joining NimbleUser, Ted consulted with teams throughout the United States, including CitiGroup, Nationwide Insurance, and Pepsi Bottling Group, and he is a regular speaker at ApacheCon and the Ajax Experience. Ted is also a former member of the Apache Struts project and co-founder of the Apache (Jakarta) Commons. His books include Google Wave (Preview) Explained, JUnit in Action, Struts in Action, and Professional JSP Site Design.

As a Business Analyst for NimbleUser, Ted concentrates on identifying business needs and crafting solutions that meet an organization’s goals, objectives, and budget. Locally, he also serves as VP of Finance for the Rochester Chapter of the International Institute for Business Analysis (IIBA).

Salesforce User Group - Speed Demoing


Ten minutes doesn’t sound like much, but a handful of presenters cut to the chase and delivered some great Salesforce tech in 600 seconds each on Thursday afternoon at a Speed Demo session of the Rochester Salesforce User Group, hosted by Mark Cook of the Rochester Group at 600 Park Avenue.

Create and update fields en masse (Tom Patros)

Using the supremely helpful Force IDE (an Eclipse plugin), Tom showed how easy it can be to mass edit field properties via XML metadata. Developers already use the IDE to create and test Apex classes and triggers. Tom walked through how admins can use the IDE to quickly add or update custom fields by editing the XML metadata definitions the Force IDE generates. While easy to use, the Salesforce graphical user interface can become tedious when mass editing a set of fields, making editing via XML seem like a breath of fresh air. You can even manage picklist definitions by editing the metadata. For extra credit, Tom also demoed a Google spreadsheet that can be used to craft and document the schema, and then generate the definition to use with the IDE.

Connecting with SFDC APIs from Access (Bob Scott)

One of the most popular Salesforce admin tools is the Excel connector that makes it easy to manage Salesforce data directly from Excel. Bob Scott walked through how Access can be used to, well, access Salesforce data in much the same way. With a bit of elbow grease, you can update SFDC from an Access application, or pull SFDC data into Access.

Approval process simplified (Theresa Mason)

Often, an Opportunity or Quote needs to go through an approval process, and to make it through the gauntlet, certain fields must be completed. Getting users to remember which fields to fill out first can be a challenge, especially in the excitement of taking an opportunity to the next level. Theresa showed us how to add a validation rule to a checkbox that triggers an automatic review of a record before submitting it to an approval workflow. It’s a great way to reduce many rules to one: check the box and follow the instructions.

Synchronizer data mover (Rich Bilsback)

Synchronizer is a Microsoft Access database that helps you automate data tasks in Salesforce. Rich walked us through how he uses the Synchronizer to keep local application data updated, and how to use a local application to update your Salesforce data in the cloud. To get started, grab the Synchronizer from the Salesforce AppExchange. It’s a free “Aloha” application distributed by Force.com Labs.

Stay tuned to the Rochester Salesforce User Group on LinkedIn for more Salesforce Thursdays coming this summer.