Don’t Walk Under Ladders

The following is an opinion piece: it reflects my personal preferences and demonstrates a coding construct that works well for me.

This is the 15th post in my series: Coding Without the Jargon, where I show how I use condition ladders.

Condition Ladders

The construct is not something I invented, but I like to think I came up with the name.

Check it out for yourself and try Googling
"condition ladders" coding
and there’s a good chance you’ll bump into one of my blog posts as the top ‘hit’.

The construct is nothing more than a non-looping loop. In C/C++ it uses
do { } while(false);

I call this loop ‘the ladder’.

Inside the loop, you can use a break; statement to ‘fall off the ladder’ and skip ahead to the while (false);.

Something like:

do {

   if (! canPerformTheTask()) { // Non-error reason to fall off ladder
      break;
   }

   if (! precondition(data)) { // Problematic reason to fall off ladder
      LOG_ERROR("precondition not met");
      break;
   }

   // Made it down the ladder without falling
   retVal = doSomething(data) + doSomethingElse(data);
}
while (false);

The two if statements are the ‘rungs’ of the ladder. If something is not right, you fall off the ladder.

By the time we reach the ‘meat’ of the code, we’re somewhat confident all is well.

I first came across the construct when I started working with the Adobe InDesign C++ SDK. The whole SDK and all code samples use this construct.

My initial reaction was ‘yikes!’.

But then I did a lot of work with the InDesign SDK, and I started to see the advantages of the construct; now it’s part of my ‘regular’ toolkit. I like it because it works in many languages and environments.

I’ve since refined my approach a little bit more; I’ll get to that further down.

Advantages

For me, the advantages of the construct are:

Avoid Line Wrapping

I can ‘flatten’ nested if statements.

If I have a deeply nested if, I will rewrite it to use the condition ladder, which tends to shift the code back to the left.

Since I started using condition ladders, I can keep all my code within lines of no more than 80 or 132 characters.

It takes a bit of refactoring, and ‘flipping’ of conditions, but once I am done, my code will generally be an almost linear sequence of rungs down a condition ladder.

Simple example.

if (something) {
  if (somethingElse) {
    doSomething();
    if (somethingMore) {
      doSomethingMore();
    }
  }
}

vs.

do {

  if (! something) {
    break;
  }

  if (! somethingElse) {
    break;
  }

  doSomething();

  if (! somethingMore) {
    break;
  }

  doSomethingMore();
}
while (false);

In the above example, the condition ladder is only nested two levels deep, whereas the original is nested three levels deep.

Furthermore, I have finer control over where I want to break into the debugger.

In the first bit of code, it would be a bit more cumbersome to set a breakpoint that triggers when ! something.

With the condition ladder, I can put a simple breakpoint on the break; statement.

Single Return

I prefer that my methods and functions only have a single return at the end of the function.

That way I can confidently set a breakpoint on that line, and know that the execution will ‘hit’ that line and I can inspect the return value.

To ensure that, I tend to combine the condition ladder with a nested try/catch to make sure the code cannot escape my debug session when something throws.

There are arguments for and against that; my approach is that exceptions should be exceptional, and if my code throws it’s because of something I did not anticipate, and things are really bad. See Exceptional!

A method might look similar to this:

int someFunction(const someType& someParam) {
  int retVal = 0;
  LOG_ENTRY();
  do {
    try {
       ... condition ladder rungs ...
       ... calculate sumethin' ...
       retVal = theResult;
    }
    catch (std::exception& e) {     
      LOG_ERROR_WITH_DATA("throws %s", e.what());
      retVal = BAD_RETURN_VALUE;
    }
  }
  while (false);
  LOG_EXIT();
  return retVal;
}

That return retVal is the only place where the method exits, so I can confidently put a breakpoint there and know I’ll hit it.

Inspect Intermediate Results

I also like condition ladders because I find that naming intermediate results helps both debugging and readability.

So instead of

if (something() && (somethingElse() * A_CONSTANT) > SOME_LIMIT) {
...
}

I’ll tend to write

auto someThingResult = something();
auto someValue = somethingElse() * A_CONSTANT;
if (someThingResult && someValue > SOME_LIMIT) {
...
}

In a debug session, I can now easily inspect the ingredients before performing the if.

I find that introducing such intermediates is easier when there is a condition ladder.

And in practice, a good compiler or interpreter will generate the same or nearly the same production code, so this is often ‘speed-penalty-free’.

Alternatives

There are alternatives to condition ladders.

One of them is ‘early return’. The code is not wrapped in an unsightly do { } while(false); and instead of using break; to fall off the ladder, you use return; to return early.

I am not a fan, but that does not mean it’s a bad approach.

I don’t like it because it makes my debug sessions more frustrating.

Also, I occasionally want to tweak the return signature of my methods, and if there are multiple returns, they might all need tweaking.

Another alternative is nesting your if statements, but I often get frustrated when I accidentally get my { and } in a twist.

With a condition ladder, I can avoid deeply nested {}.

Tricklets

As time goes by, I often try out little improvements. They’re not real, full-blown ‘tricks’, so I jokingly refer to them as ‘tricklets’ (which is actually a real word, but with a different meaning).

CONDITION_LADDER_EXIT

One tricklet is to define a constant whose value is false. This constant has a name like CONDITION_LADDER_EXIT.

That way I can write

do {
...
}
while (CONDITION_LADDER_EXIT);

and I can then add documentation to a comment near the definition of CONDITION_LADDER_EXIT.

That way, I don’t have to add any comments explaining what a condition ladder is.

Whoever reads the code will spot CONDITION_LADDER_EXIT, go ‘huh?’, right-click it to view the definition of the constant, and bingo! there’s the explanation.
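A minimal sketch of the tricklet (the surrounding function and its logic are made up for illustration; only the constant name comes from the post):

```cpp
#include <cassert>

// A condition ladder is a do { } while loop that never repeats.
// The constant below exists purely to carry this explanation: anyone who
// right-clicks CONDITION_LADDER_EXIT lands on this comment.
const bool CONDITION_LADDER_EXIT = false;

int clampedDouble(int value) {
    int retVal = 0;
    do {
        if (value < 0) {    // rung: negative input falls off the ladder
            break;
        }
        if (value > 100) {  // rung: too-large input is clamped, then falls off
            retVal = 200;
            break;
        }
        retVal = value * 2; // made it down the ladder without falling
    }
    while (CONDITION_LADDER_EXIT);
    return retVal;
}
```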

Macros

In C++, I make extensive use of macros to visually simplify condition ladders.

These macros condense recurring patterns in my code into one-liners, rather than ‘fluffing up’ the code with boring, repetitive stuff.

I love macros, because I can easily switch between multiple ‘versions’ of macro definitions: in debug builds there is extensive checking and logging; in production builds, pure debug code can be stripped.

A made-up non-functional code extract (I added some fake lines to serve as examples).

IOMNodePtr EvalScriptTask::call(
    const OMStoredPtr& wrapped_this, 
    const OMScopePtr& scope, 
    const std::string& scriptFilePath, 
    HandleReturnValue handleReturnValue)
{
    IOMNodePtr retVal;
    
    BEGIN_FUNCTION;
    
    PRE_CONDITION(scriptFilePath.length() > 0, FUNCTION_BREAK);

    TaskPtr task = 
        factory(
            wrapped_this,
            scope,
            scriptFilePath,
            handleReturnValue);
    SANITY_CHECK(task, FUNCTION_BREAK);
    
    EvalScriptTask* evalScript = dynamic_cast<EvalScriptTask*>(task.get());
    SANITY_CHECK(evalScript, FUNCTION_BREAK);
    SANITY_CHECK(
        evalScript->runToCompletion(), 
        FUNCTION_BREAK);

    retVal = evalScript->fRetValNode;

    END_FUNCTION;

    return retVal;
}

The BEGIN_FUNCTION/END_FUNCTION macros combine logging of entry into and exit from the method, and also contain an implicit condition ladder.

Then, macros like SANITY_CHECK and PRE_CONDITION form the rungs of the ladder.

PRE_CONDITION is used when something must be true before we can perform the function, but it is not an error if it is not true. If it is not true, we simply bail out and fall off the ladder.

SANITY_CHECK is used when something must be true before we can perform the function, and it is an error if it is not true. If it is not true, we log the problem, then bail out and fall off the ladder.

This makes the code less ‘wordy’.
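As a sketch of what such macros might expand to (the post does not show the actual definitions, so everything below, including the simple break-based exit action and the sample function, is an assumption):

```cpp
#include <cstdio>

// Hypothetical definitions; the real macros would also hook into the
// project's logging. EXIT_ACTION is whatever falls off the ladder.
#define PRE_CONDITION(cond, EXIT_ACTION) \
    if (!(cond)) { EXIT_ACTION; }

#define SANITY_CHECK(cond, EXIT_ACTION) \
    if (!(cond)) { std::printf("sanity check failed: %s\n", #cond); EXIT_ACTION; }

int safeDivide(int numerator, int denominator) {
    int retVal = 0;
    do {
        // Not an error: a zero numerator simply means a zero result.
        PRE_CONDITION(numerator != 0, break);
        // An error: log the problem, then bail out.
        SANITY_CHECK(denominator != 0, break);
        retVal = numerator / denominator;
    }
    while (false);
    return retVal;
}
```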

Note that macros like SANITY_CHECK are not allowed to be suppressed completely (e.g. for a compact production version) if the code called within the macro invocation has side effects.

I resolve this with a variant, OPTIONAL_SANITY_CHECK, which I will only use on code that has no side-effects, so the whole OPTIONAL_SANITY_CHECK(...) can ‘poof!’ away without problems.

FUNCTION_BREAK kind of boils down to a break; but there is more to it. A normal break would not be able to reach the bottom of the function if it is inside, say, a while loop. The FUNCTION_BREAK macro has provisions so it bails out to the bottom of the function even when it is nested inside another construct (while/for/switch...).

Next

If you’re interested in getting help with automating some part of a Creative Cloud-based workflow, please reach out to [email protected] . We provide developer training, and we can create custom scripts and plug-ins, large and small, to speed up and take the dread out of repetitive tasks…

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Creative Coding Collective

I’ve set up a questionnaire. If you’re interested, fill it out – I’ll use the email addresses to email out links to the meeting recording afterwards.

The meeting time is below; I’ve converted it to a few different time zones for convenience.

New Zealand: 06:00-07:00 NZST, Friday, June 27
US Pacific Coast: 11:00 AM-12:00 PM PDT, Thursday, June 26
US East Coast: 2:00 PM-3:00 PM EDT, Thursday, June 26
Brussels, Europe: 8:00 PM-9:00 PM CEST, Thursday, June 26
India: 11:30 PM-12:30 AM IST, Thursday-Friday, June 26-27

The meeting link is in the questionnaire:

https://forms.gle/vW2cRphLitXeuVaK6

Who Can Benefit?

  • Developers & Studios – Solo devs, small teams, consultants, and technical specialists
  • Creative-Tech Professionals – Designers who code, creative technologists, and workflow automation experts
  • Industry Veterans – Platform specialists, enterprise consultants, former agency leads, and mentors
  • Creative Leaders – Design directors, production managers, and anyone validating market needs

The Common Thread: Whether you build it, design it, manage it, or need it – if you’re working at the intersection of creativity and technology, you belong here.

How It Happened

I was one of the organizers of the Creative Developers Summit in Phoenix, and as part of the 16th yearly summit, we had a round table to discuss various challenges we all share.

The Creative Developers Summit is an annual gathering of developers/automators, mostly working with Adobe Creative Cloud applications. More info here:
https://thoughtbridg.es/why-you-need-to-attend-creative-developers-summit/

At the round table, we all got a bit excited at the prospect of getting more organized and tackling more substantial issues. More info further down.

Keeping The Momentum

In order to keep the momentum going, I am organizing another online-only round table.

This round table will be recorded, so if you’re unable to attend you can review afterwards.

I am first trying to determine the best time that gets the most attendees.

As a global un-organization, the sun never sets on the Creative Coding Collective, and that means there is no good timezone to get everyone on board.

So, please visit

https://doodle.com/group-poll/participate/dRjGvVEd

and register when you could attend. I’ll announce the agenda and the actual date and time closer to the 27th or 28th.

I’ve set up a questionnaire to help determine the agenda of the meeting. Please fill it out!

https://forms.gle/vW2cRphLitXeuVaK6

What Is The Creative Coding Collective

The related Creative Developers Summit (https://creativeproweek.com/phoenix-2025/creative-developers-summit/) is something that is driven by a loose ‘swarm’ of creative developers.

This swarm also runs a private Slack, and over the last two decades, it’s grown to be 700+ members.

It’s a volunteer un-organization, full of helpful and highly knowledgeable people.

As a result of the domain name CreativeCodingCollective.org being available, I’ve taken it upon myself to name our swarm: The Creative Coding Collective.

Woulda, Shoulda, Coulda

One of the results of the round table discussions was that we should become more organized, more substantial, more inclusive, and use the power of our swarm to achieve more results in a number of areas.

One idea was to form a guild or a union, get a stronger voice for advocacy and lobbying, become better at making our craft visible, market better…

My personal takeaway was that

  • we all agreed
  • but we agreed on ‘shoulda, coulda, woulda’.

Adobe shoulda, we coulda, we shoulda, Adobe coulda, Affinity shoulda. Also, a lot of ‘impossible’. And if you start from the premise that something is impossible, it is.

I put some thought into that. We’ve had 16 Creative Developers Summits by now, and despite having no financial or legal resources, we’ve achieved some things as a swarm.

However, often our meetings degraded into complain-fests and a big pile of great ideas, quickly forgotten, never to be executed.

Different Approach

I decided to take a different approach, and instead first concentrate on how we can make the shoulda, coulda, woulda into will, can, shall.

I want to change cries of ‘Impossible!’ into questions of ‘How?’.

If someone says something is impossible, turn it around and follow up with the question ‘How can we do it anyway…’.

Me, personally, I am doubtful we can start by setting up a full-fledged not-for-profit. It’s a great goal to have, but I fear it would be too much too soon, and would peter out.

Hence, I thought about ‘how?’. Here’s what I think can work: I want to start with a divide and conquer strategy.

Break the problem into small pieces, so just a few people can tackle individual tasks, pool our resources, brains and effort, and get it off the ground.

To do that, we need some tools to manage various initiatives: Situation, Mission, Execution, Administration and Logistics, and Command and Control.

Rather than take a big bite, I want to take many small bites and tackle various initiatives as a swarm, where each of us does a little bit, and collectively we achieve substantial results.

Setting up a ‘real’ non-profit or guild can be one of the first initiatives we can then attempt to run within the collective.

Crank-Starting It

Our swarm has no financial structure; there is no substance to it.

‘Impossible!’.

No!!! Instead we have to ask ‘How?’.

I have tried to find a way to kick-start things without spending (much) money.

What I’ve done is:
– register a domain name (CreativeCodingCollective.org). That means that I’ve unilaterally ‘named’ our swarm ‘The Creative Coding Collective’. Feel free to disagree and join the round table!
– set up a mostly private WordPress site that we can use to project-manage the various initiatives. Slack is great for interactions, but it does not work well for managing concurrent projects and initiatives.
– register a GitHub persona so we have something to put collective projects into (@CreativeCodingCollective42).

The site at https://hub.CreativeCodingCollective.org is up but nearly empty – I still have a bunch of scaffolding to create.

I am still fleshing it all out. At the moment I am not interested in the woulda coulda shoulda: my focus is entirely on setting up some infrastructure that we can use to get started.

Once we have the infrastructure, we can start formulating various initiatives and projects, assign teams, track progress,…

So, yes, I know there are better tools out there. Yes, I know the logo stinks. Help us get a better one!

WordPress is not great, but it will do to get started.

My selection of tools is not driven by ‘the best tool’, but by ‘the affordable tool that might be good enough’.

I am currently financing this out of my own pocket, in my own time, and as a one-man-band there is only so much I can spare. My hope is that once the scaffolding is up, we can bootstrap and more people can pick up tasks.

Why WordPress?

I picked WordPress because it will allow me to do the following:

  • Have some public-facing content: pages that can be viewed and accessed by guests to the site.
  • Have private content: pages that can only be viewed by members of the collective
  • Have structured content: we can organize things into a hierarchy: projects, member bios, help wanted, advocacy, SDKs and APIs, documentation, conferences, shared booth spaces, a YouTube channel…

I will also probably add a DokuWiki to the offering, so we can use a wiki structure as well.

Conclusion

Please fill in the Doodle:

https://doodle.com/group-poll/participate/dRjGvVEd

and try to attend the online meeting. It’ll be shortly after the Adobe Developers Live (see https://developer.adobe.com/developers-live/ )

Keep in mind that at this point in time I am not interested in any woulda coulda shoulda.

My focus is firmly on setting up a scaffold that we can use to start and run projects and initiatives as a collective.

I want to set up a system where we can coordinate, focus and channel the raw power of the swarm onto achievable goals.

If you want to be part of the collective, start thinking about what you can do for the collective.

Any thoughts, ideas? Reach out to kris at rorohiko.com.

Be Committed: Why Even Solo Developers Need Git

This is the 14th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Source control isn’t just for teams.

Even as a solo developer, I never code without it.

What? Where?

98% of the time, the source control systems I use are git-based.

There is a tiny bit of Perforce, and maybe some old and forgotten SVN. For most intents and purposes it’s all git.

Most of my code is in a ‘private’ git.

There is very little to setting up a private git ‘server’. Actually, it’s not even a server.

In my case it’s a simple Linux virtual machine in the cloud, with a large hard disk, accessible via ssh with a passwordless ssh-key login.

That’s all there is to it. Of course this private setup does not offer the additional features offered by GitHub or Bitbucket, but… it’s private. No snoopers, no AI scrapers.

Another large part of my code is on GitHub, some of it in private repositories, most in public repositories.

And then there are loose odds and ends in Bitbucket, GitLab, and other ‘git’ pools.

Why?

Except for very small throw-away scripts, I won’t code without being backed by a source code control system.

This is despite the fact that much of the stuff I write is for my eyes only.

Forever Undo

Reason one: it’s my “forever undo”.

Some people have a habit of commenting out large gobs of code ‘for later’ or ‘just in case’. Or sometimes there is code that is unused but left in the project ‘just in case’.

I try not to do that.

First, it makes it very hard to use a ‘global find’ across a project without being hampered by false positives in commented-out or unused code.

Second, it leads to code bloat: files full of unused functions that still need to be updated, upgraded and tweaked as the code around them changes.

Instead, I’ll make sure the code has been committed and pushed into the repo, and then I’ll simply strip away gobs and blobs of code with impunity.

If I ever need it back, I just pull it from the repo. Clean code, no regrets. Keep it lean and mean.

What’d I DO?

The second reason I use the source code control system is as a diagnostic helper.

I’ll be coding away for a few hours, and all seems well, and then I’ll do a proper test of the system, and it’s broken in a weird and wonderful way.

What did I DO? Aaaargh.

Thanks to the source code control system, I can roll the code back to prior states, and roughly determine when I broke it.

Once I know in which commit the breakage happened, I can simply use the code comparison features of my git client to see what changes I made, and figure out how I broke the system.

What I DID!

The third reason I use the source code control system is to keep a readable log of changes to the system.

Each time I commit, I will write a sensible commit message, explaining what I did.

This helps my future self retrieve the correct commit where I made changes to a particular area of the project.

Granularity is the key

Being granular is important in order to make the source code control system work for all three use cases I just mentioned.

Imagine I were to make only a single large commit at the end of my workday.

Such a commit would combine lots and lots of changes all over the project, and it would be very hard to zoom in on any particular change.

Instead, I’ll commit very frequently. Baby steps, say, every 15-30 mins or so.

It often happens that in that time, I will still have covered multiple ‘things’.

I’ll then use the features of my Git client (e.g. SourceTree, SmartGit, VSCode…) to ‘split’ the changes I made into multiple separate commits, so as to avoid mixing different kinds of changes into a single commit.

Conclusion

Use a source code control system, even as a sole operator. Make granular commits often, and document what each commit does in the commit message.

In return, as a sole operator, you gain a forever undo, a diagnostic helper, and a neat timeline overview of the changes made to the project.

Of course, as a sole operator it’s possible to work without a source code control system. I would not do it, but it’s feasible.

But when you are working with a team on a larger project, a source code control system is a must. Not using one is madness.

Murphy is always lurking and you cannot have enough safety nets. Murphy strikes when you least expect it.

In addition to git, I also rely on Time Machine on my Macs, as well as a private cloud storage server (ownCloud), as well as BackBlaze, as well as a regular full disk backup…

It might sound paranoid, but I have the scars to prove that these safety nets are necessary. None of these additional services should be considered a replacement for what git offers. I’ve been saved by these extra safety nets when disaster struck in an unexpected way.

Next

If you’re not using a source code control system, stop what you’re doing and set it up. You can roll your own, use GitHub, whatever. Start today.

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Name It!

The difference between data and actuallyUsefulReference.

This is the 13th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

You’ll often hear that it is important to add comments to your code. I don’t think that is quite true.

I see comments as a means to an end. The goal is to make the code more readable, and comments are just one way to try and accomplish the desired result, which is readable code.

Rule #13: Put effort into naming things.

One of my recurring messages is: have some coding convention and be consistent. A convention for naming things is part of the coding convention.

For whatever they’re worth, I’ll describe some of my naming conventions, which reflect my personal quirks and preferences. Don’t take what follows as gospel.

Comments

The problem with comments is that they are not coupled to the code.

Their only relation to the code is their position within the code. That’s it.

Initially, a comment will clarify the code.

Often the code then changes, and the interspersed comment gets overlooked and remains unchanged.

Finally, what starts out as a helpful comment inadvertently becomes an unhelpful lie.

At best, a comment is a liability: the software developer is liable to keep it updated as the code evolves.

At worst, comments are booby traps that cause harm at undesirable moments.

Better Than Comments

I put a lot of effort into judiciously naming and renaming the various elements in my code: variables, functions, constants…

I find it useful to re-read and tidy up my code at most a few days after writing it. By then, I still remember enough about how it all works, but any details that are not obvious will become apparent to me as I try to make sense of what I’ve written.

A helpful use of comments is to use structured comments to mix API documentation into the code. I can run a documentation-generation tool to extract that info into readable documentation. That works great.

In rare circumstances, I’ll use a comment to point out a ‘gotcha’: areas where the code looks like other code, but is subtly different. Alert the reader to the fact that there is some detail they might not expect.

Apart from structured documentation and gotcha comments, I will only add a comment if I really, really cannot make the code self-explanatory by reshuffling statements, renaming elements, adding white space, dividing into smaller chunks, grouping larger actions into bite-sized thoughts…

Naming

Things I consider when naming things: I’ll rarely, if ever, use variable names like i, j, idx, count,… Those names convey very little meaning.

I don’t have a rigid approach to how the names are spelled out. UPPER_CASE, lower_case, camelCase, first letter upper- or lowercase… that all depends on the project and language at hand. There might be existing ‘best practices’, which I’ll try to follow.

Like As Like

What I am rigid about is consistency. I want similar elements to be named similarly, to draw attention to the fact that they’re similar. Below, yes marks variable names I might have used in my code; no marks inconsistently named variables I’ll try to avoid using within the same project.

It’s not about the names themselves, but rather about the consistency in how the names are chosen.

yes: authorPtr vs. bookPtr
no: authorPtr and pBook
yes: numParagraphs and numCharacters
no: numParagraphs and charCount
yes: paraCount and charCount
no: numPara and charCount
yes: STATE_IDLE and STATE_EXPECTING_LETTER
no: IdleState and STATE_EXPECTING_LETTER

Depending on the context (language, complexity of the project), I might also cram some ‘meta-info’ into the variable name.

Parameters

Most of the time, I prefer to use call-by-value behavior, and use return to hand back results, but for some projects, I use call-by-reference and have some more complicated mechanism for passing data in and out of methods.

In those cases, I might use a name prefix (in_…, io_…, out_…) on function parameters to differentiate parameters that are ‘by value’ vs parameters that are ‘by reference’.

Pseudocode example:

function doSomething(in_someValue, io_someReference, out_someResult) {
...
}

in_... means: used for data being passed in.
out_... means: used for data being returned.
io_... means: data being passed in, then modified in the method, so different data is being returned.

I will express things like ‘const-ness’ through features of the language at hand.
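In C++, for instance, the prefixes can be combined with const-ness like this (a made-up example function; only the naming convention comes from the text):

```cpp
#include <string>
#include <vector>

// in_ parameters are read-only (enforced by const), io_ is read and
// modified, out_ is a result handed back to the caller.
void appendGreeting(
    const std::string& in_name,       // passed in, never modified
    std::vector<std::string>& io_log, // passed in, appended to
    std::string& out_greeting)        // returned to the caller
{
    out_greeting = "Hello, " + in_name + "!";
    io_log.push_back(out_greeting);
}
```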

Meta-Information In Names

I don’t use Hungarian Notation, but I do use some similar ideas, where the name of the entity reflects some additional meta-information about it, meta-information that helps a human interpreter of the code understand some of the less-obvious relations in the code.

An example: I have variables that store JSON strings, and corresponding variables that store a deserialized object. In my code I will use names like thingJSON and thing which indicate to me that thingJSON is a string which can be passed into some JSON parser, and thing is the deserialized result.

For pointers to things I might use either a p... prefix or a ...Ptr suffix – mostly depending on whatever convention is already in force throughout the project. I have no real preference for one over the other. The main thing is: it needs to be consistent throughout the project.

I’ve spent a lot of time writing C and C++ code, and one habit I formed is to use ALL_UPPERCASE for constants and macros. It’s a habit, in my opinion not better nor worse than using prefixes like c... or k... for global constants.

Booleans

In the same vein, I will try to name variables and functions that handle boolean values such that their name reflects their boolean nature. isValid, hasDelimiter, isCachedValueAvailable,…

Collections

With collections, I’ll try to reflect the type and structure of the collection. Instead of a bland variable name like users which does not hint at how those users are stored, I’ll prefer to use variable names like userList, userMapByName, userSet… to hint at the underlying data structure.
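A minimal C++ sketch (the data is made up; only the naming style comes from the text):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// The suffix hints at the underlying container, so a reader knows what
// operations and lookup behavior to expect.
std::vector<std::string> userList = {"ada", "grace"};
std::map<std::string, std::string> userMapByName = {
    {"ada",   "Ada Lovelace"},
    {"grace", "Grace Hopper"}};
std::set<std::string> userSet = {"ada", "grace"};
```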

Namespacing

I also like using some namespacing mechanism, especially for stuff that is globally shared.

Rather than make everything global, I prefer to use namespacing techniques. Some languages (e.g. C++, Java) have namespacing features built-in and then I’ll use those.

Other languages don’t have namespacing in which case I’ll use some approximation – e.g. in JavaScript I’ll use global objects or functions and stash stuff in them. In Xojo I’ll use modules or classes to stash global objects.

This increases the odds that the code is re-usable and reduces the risk of a clash with other code.

Some sample code from a project generated by CEPSparker:

if (! SPRK.C) {
    SPRK.C = {}; // stash constants here   
}

...

SPRK.C.APP_CODE_AFTER_EFFECTS                   = "AEFT";
SPRK.C.APP_CODE_BRIDGE                          = "KBRG";
SPRK.C.APP_CODE_DREAMWEAVER                     = "DRWV";
SPRK.C.APP_CODE_FLASH_PRO                       = "FLPR";
SPRK.C.APP_CODE_ILLUSTRATOR                     = "ILST";
SPRK.C.APP_CODE_INCOPY                          = "AICY";
SPRK.C.APP_CODE_INDESIGN                        = "IDSN";
SPRK.C.APP_CODE_PHOTOSHOP                       = "PHXS";
SPRK.C.APP_CODE_PHOTOSHOP_OLD                   = "PHSP";
SPRK.C.APP_CODE_PRELUDE                         = "PRLD";
SPRK.C.APP_CODE_PREMIERE_PRO                    = "PPRO";

Aligning and Sorting

Some people prefer their code editor to do the formatting for them, but I like to format by hand. I like vertically aligned code and monospaced fonts. I also like to have similar things alphabetically sorted.

Modern code editors like Sublime Text or VSCode make it very easy to keep things sorted and aligned.

This helps me visually spot discrepancies. Imagine I added a new constant and made a consistency mistake, like:

SPRK.C.AP_CODE_EXPRESS                          = "EXPRESS";

(note AP_ instead of APP_), it would visually stand out like a sore thumb.

Intermediate Results

A powerful technique is to store intermediate results into temporary variables with meaningful names.

Rather than write out long, complicated expressions, I’ll evaluate and store the subexpressions and then combine them in a final expression.

This has two advantages:
– it can explain the code better, without need for a comment
– it makes the code easier to debug

During a debug session, I can inspect the intermediate result, rather than being forced into an all-or-nothing situation.

Sample snippet in JavaScript: instead of

        var padding = new Array(len - retVal.length + 1).join(padChar);
        retVal = padding + retVal;

I’ll write:

        var padLength = len - retVal.length;

        var padding = new Array(padLength + 1).join(padChar);
        retVal = padding + retVal;

Regular Expressions

Naming regular expressions can also be helpful.

Regular expressions are notoriously ‘write once, read never’ constructs.

Once you’ve figured it out, you never want to dissect it again. Using named constants will help make the code readable. JavaScript example:

const REGEXP_TRIM                              = /^\s*(\S?.*?)\s*$/;

Depending on the context, there can also be a performance benefit: regular expressions need to be compiled into an internal representation, which can be expensive. Using a named constant rather than a repeated literal can be faster, because the regular expression only needs to be compiled once.
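As a small illustration, here is a hypothetical trim helper built around that named constant (the function name and usage are my own, not from a specific code base):

```javascript
// A hypothetical helper reusing the named regular expression from
// above. Because the pattern lives in a constant, it is compiled
// once, and its name documents what it is for.
const REGEXP_TRIM = /^\s*(\S?.*?)\s*$/;

function trimString(s) {
    // "$1" keeps only the captured middle part, dropping the
    // leading and trailing whitespace around it
    return s.replace(REGEXP_TRIM, "$1");
}
```

For example, trimString("  hello  ") yields "hello".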

Next

Putting some effort in naming coding elements in a consistent and helpful manner pays off.

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Writing Efficient Code Isn’t Always Technical

When it comes to writing efficient code, the choice of the algorithms can have tremendous consequences.

This is the 12th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #12: The biggest efficiency gains happen before the first line of code.

Know Your Tools

Suppose I have a large ordered table of names and I need to find a particular name. I could perform either a sequential search of the table, or I could use a binary search approach.

In many situations the binary search will be much, much faster than a sequential search.
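To make the comparison concrete, here is a minimal sketch of a binary search over a sorted array of names. This is illustrative code of my own, under the assumption that the array is already sorted:

```javascript
// Binary search over a sorted array. Returns the index of target,
// or -1 when it is not present. Assumes sortedNames is sorted.
function binarySearch(sortedNames, target) {
    var lo = 0;
    var hi = sortedNames.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1; // midpoint, rounded down
        if (sortedNames[mid] === target) {
            return mid;
        }
        if (sortedNames[mid] < target) {
            lo = mid + 1; // target can only be in the upper half
        } else {
            hi = mid - 1; // target can only be in the lower half
        }
    }
    return -1; // not found
}
```

Each step halves the remaining search range, so even a table with a million entries needs at most about twenty comparisons.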

The important thing is: when faced with a certain task, it helps to know of some basic patterns and algorithms that might be more efficient for that task.

This is where AI can help in a big way. I’ll fire up Claude or ChatGPT or DeepSeek or whatever, and have a conversation, and ask it to teach me about promising algorithms that I am not familiar with, making it a learning experience!

When You Only Have A Hammer…

Before applying any algorithm or pattern I try to take a step back and take the broader view.

For example, if the table is small, a binary search is often overkill, and can end up being substantially slower than a straight sequential search.

If the table isn’t already ordered, and sorting would be required just to use binary search, using binary search is often not worth it.

I try to avoid using a cannon to kill a mosquito.

If I know a table will only ever have, say, at most 5 elements in it, I won’t refactor that code with binary search or a B-tree.

Instead, I’ll translate my assumptions into a simple linear search and some sanity checks with logging, so the code will verify the table size and tell me if the table turns out to be larger than expected.
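A sketch of what I mean (MAX_EXPECTED_SIZE and the log wording are illustrative; console.log stands in for whatever logging facility the project uses):

```javascript
// Simple linear search that encodes the assumption that the table
// stays tiny, and complains when that assumption is violated.
var MAX_EXPECTED_SIZE = 5; // illustrative threshold

function findName(table, target) {
    if (table.length > MAX_EXPECTED_SIZE) {
        // Sanity check: the table outgrew the simple approach
        console.log(
            "findName: table has " + table.length +
            " entries, expected at most " + MAX_EXPECTED_SIZE);
    }
    for (var idx = 0; idx < table.length; idx++) {
        if (table[idx] === target) {
            return idx;
        }
    }
    return -1; // not found
}
```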

The important thing is to take a step back and do a bit of thinking and research.

Is It Worth Fretting Over?

It is important to look at the wider picture.

Sometimes a brute-force search is fine. Sometimes you need to rewrite your data model. The hard part isn’t coding efficiently. It’s knowing when efficiency matters.

Throwaway Code

I often write throwaway code, for example when I am doing a database conversion, or some data massage.

I regularly build programs or scripts that are only run a few times, then discarded.

I’ll only dig in when the potential time savings start surpassing the effort needed to implement them.

The difference between 0.1 sec and 120 sec execution time is not relevant, especially when compared to the extra effort to implement a more efficient algorithm.

On the other hand, if I have a script that needs 72 hours to execute and with a few hours work I can bring that down to 15 mins, that’s worth it.

Existing Third Party Code

There might be existing libraries/modules/source code examples that provide the functionality I need.

Whether I rely on existing third-party code depends on who made it, how much I need it, what it does, and how deeply nested and far reaching the dependencies are.

Pulling in external modules often equates to quick relief, followed by long suffering.

Think about it: once I pull in external modules, I allow someone who is not me to have some control over my software.

I’ll try very hard to rely on as few external dependencies as possible. I will try to avoid using package managers in the Node.js eco-system.

The first reason: I want to avoid ‘update hell’: I hate it when I pick up a dormant project a few months later, and I am forced to spend a day or two to catch up and update my code for all the updates and deprecations in the external modules I was using.

Second reason: safety and security. The world of open source has changed for the worse over the last few decades, and I don’t have much trust in such eco-systems.

Many of these ecosystems feel like houses of cards. One compromised package deep in the tree, and a Trojan horse walks right in.

A few examples to see what I am on about:

https://cloud.google.com/blog/topics/threat-intelligence/supply-chain-node-js/
https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident
https://en.wikipedia.org/wiki/XZ_Utils_backdoor

Existing Private Code

For a wide range of functionality, I prefer to roll my own.

Over the decades, I’ve built a broad library of stable, reusable code that can easily be re-purposed within new projects.

I know my code is not bug-free, but it’s mature and stable, and I retain 100% control.

When I need some functionality that I have not covered yet, and I feel confident enough to write my own version, I’ll do just that and extend my private library.

Trusted Third Party Code

The other side of the coin: I also try to avoid the ‘not invented here syndrome’, or to bite off more than I can chew.

For example, I would not dream of writing my own crypto or compression modules.

There are just too many pitfalls and ways to mess things up.

Things like OpenSSL, libtiff, libjpeg, zlib, boost, MariaDB, PostgreSQL, SQLite… have less of the ‘wild west’ mentality than the Node.js eco-system.

They’re well maintained, mature and stable, and security issues tend to be resolved promptly. Breaking API changes are very rare.

I try to strike a balance between being too cautious and being too trusting by relying on just a few external libraries.

The way I currently approach external dependencies: I need to be able to tell you, off the top of my head, exactly what the external dependencies in my projects are.

If I cannot do that I know there are too many.

If I can do that, I know there will be only a few, and they’re easy to track.

With a Node.js package manager that would rarely be the case, as the dependency trees run way, way too deep and wide.

Next

In the end, efficiency isn’t just about shaving milliseconds. It’s about making thoughtful decisions that pay off across the full life of the project.

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Give Me Space!

The most important character in my code is whitespace (space, tab, newline…).

This is the 11th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #11: Whitespace is structure, not decoration.

Adding more whitespace to my code does not slow down its execution in any significant way.

But it can hugely speed up comprehension for a human reader who needs to digest and understand my code.

Example

I asked Claude 3.7 to dig up some real-life examples; I picked this one to show what I mean. I don’t know where it came from, and that is not important.

bool isValidFile = (fileSize > 0 && fileSize < maxSize) && (fileType == "jpg" || fileType == "png" || fileType == "gif") && !(filename.contains("..") || filename.contains("/")) && (uploadedTime > lastMaintenanceTime) && (userId == ownerUserId || userRole == "admin" || (userRole == "editor" && sharedWithUser)) && !isCorrupt;

The actual formatting of a visually restructured version is a matter of personal preference; there are a million other good ways to format such an expression. Assuming I’m only allowed to add whitespace, and change nothing else, here’s how I might restructure it.

bool isValidFile =
    (
        fileSize > 0 
    && 
        fileSize < maxSize
    ) 
&& 
    (
        fileType == "jpg" 
    || 
        fileType == "png" 
    || 
        fileType == "gif"
    ) 
&& 
    ! (
            filename.contains("..") 
        || 
            filename.contains("/")
    ) 
&& 
    (uploadedTime > lastMaintenanceTime) 
&&
    (
        userId == ownerUserId 
    || 
        userRole == "admin" 
    || 
        (
            userRole == "editor" 
        && 
            sharedWithUser
        )
    ) 
&& 
    ! isCorrupt;

Try to grok both statements.

The point is: the restructured version is easier to mentally consume. You can spot the various sub-expressions and easily digest each of the sub-expressions in turn.

Don’t Mislead

Adding more whitespace can be very helpful, but it can also be misleading, so I am extra careful and will triple-check my work to make sure the formatting matches the actual code structure.

Here are examples of code where poor formatting actively misleads the reader:

if (a > 5) 
    increase = a * 0.9;
    total += increase;

if (b > 7)
    increase = b * 0.87;
    total += increase; 
  
var x = 
    a > 2
&&
        c > 3
    ||
        d > 5;

// misleading! y will be 7, not 3: no semicolon is inserted
// after the 3, so the + 4 is still part of the assignment
var y = 3
    + 4; 

(I know the code does not make much sense, I am just making a point).

These could easily mislead a human reader into misinterpreting the code.

In my own defensive coding style, I personally never use if statements without a { some code... } compound statement, even when the body is just a single statement.

Also, I will try to add parentheses around sub-expressions, even when operator precedence makes them superfluous.

if (a > 5) {
  increase = a * 0.9;
}
total += increase;

if (b > 7) {
  increase = b * 0.87;
}
total += increase; 
  
var x = 
    (
        a > 2
    &&
        c > 3
    )
||
    d > 5;

var y = 3 + 4;

Conclusion

Whitespace costs nothing, but the clarity it can add is priceless. Use it deliberately, but be careful to not use formatting that misleads the human reader or introduces bugs.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

InDesign, Tables, Scripts, Vibe Coding

Hairy Splits

A story about splitting InDesign tables. You can find out more about TableAxe here:

https://rorohiko.com/TableAxe

Video demo: https://youtu.be/kq2Ilomtgyw

Recently, I had to do some work with tables in InDesign, and had a need to split a table vertically, creating two tables.

I had not had that need before, and I blindly assumed there would be a menu option to do that.

Turns out… no, that does not seem to exist. Ah well, surely there will be a script somewhere that does that? Turns out… nothing that I can find. There are some scripts (I found this one by Peter Kahrel) that are related, but none that had the functionality I was after.

That feels like a real functionality gap to me!

Hey, why not try vibe coding and experience first hand how well that works?

Starting Simple

I used Claude 3.7 and explained what I needed.

I could have tried the amazing Omata Lab’s MATE, but I like to work closer to the metal, so I used ‘raw Claude’.

Claude confidently spat out an ExtendScript, and on a quick diagonal read it seemed to kind of make sense.

I tried to run the script – nah, that did not work. Inspecting the script a bit closer, it turned out Claude ‘imagined’ some handy new DOM methods that don’t actually exist in the real InDesign DOM.

I got into a ping-pong match with Claude: fixing one problem created a new problem.

A game of whack-a-mole. Eventually I got a splitter script going, but I did not like the result very much, and it took me a bit longer than if I had started from scratch under my own power.

This initial script by Claude had some useful tidbits in it, but the script as a whole felt like a one-trick pony.

One useful tidbit I learned: you can pass negative indices to InDesign collections to address elements at the end of the collection – so document.rectangles[-1] is the very last rectangle. Never too old to learn something new.

Put The Thinking Cap On

When I was looking for an existing script earlier, I found some scripts that could split tables horizontally, so I initially did not envision adding horizontal split functionality to my script.

But then I started thinking: what if I made a script that was a one-stop-shop for all kinds of table splitting and merging? I’d surely use such a script if it existed!

Creating a single script to handle all kinds of table split/merge operations looks like a worthwhile endeavour!

Creating TableAxe

So, I started over and built TableAxe.

TableAxe is a script that can split and join tables in InDesign, either vertically or horizontally.

There are a few interesting aspects to TableAxe.

  • No user interface to speak of. The only user interface it presents is a dialog with a message and an OK button
  • A single script handles both merging and splitting
  • Properly handles header and footer rows
  • The user manual is built into the script, and the script gives helpful feedback
  • Uses PluginInstaller to install/uninstall the script

More info about TableAxe: https://rorohiko.com/TableAxe

No user interface

Two reasons.

  • Developing user interfaces is expensive.

If I wanted TableAxe to have a user interface with fields, checkboxes and buttons, I’d need to either create a UXP plugin or a CEP panel. That’s perfectly feasible, but quite a bit of extra effort. Alternatively, I could use ScriptUI (a built-in feature of ExtendScript), but ScriptUI is no longer well supported by Adobe, and I don’t really want to have my script be dependent on how well ScriptUI still works.

  • A clickable user interface with fields and buttons slows the user down.

In my experience, InDesign users are a fast and ferocious lot. They love keyboard shortcuts and one-click operations. If I can avoid having a user interface, users can drive the script really, really fast. Fly through a document and split five tables in five seconds, that kind of thing.

Single Script

TableAxe is a single script. You just run it, and based on what’s currently selected in the InDesign document, it knows what to do.

It knows whether to split or merge. It knows whether it’s horizontal or vertical. It knows what tables to merge.

That means you can assign a single keyboard shortcut to the script, and that one keyboard shortcut does everything.

Fast, fast, fast and furious.

Properly Handle Header and Footer Rows

Header and footer rows are not ‘split’. They are considered to be part of the ‘table border’. So if you split a table with header and footer rows horizontally, these rows will appear in both tables.

When you merge two tables to make a taller table, TableAxe will verify that the header and footer rows match before merging.

Helpful Feedback

I like scripts that don’t leave the user guessing when something is wrong.

When the current selection cannot be handled sensibly by the script, it will provide helpful feedback to the user.

If the script runs without anything selected, it will display a user manual.

If the user attempts to split through a header or footer row, the script will tell the user that does not work.

If the user attempts to merge two tables that don’t fit together (e.g. wrong number of rows or columns, or mismatched header/footer), the script will point the user to the issue.

PluginInstaller

I am using PluginInstaller to distribute TableAxe.

A TableAxe license is US$4 per seat per year, and I’ve chosen to make the license optional. If the user does not pay for an activation, the tool will continue to work and remain fully functional.

Before anything else: people paying me US$4 for a license will not cover my cost of development and hosting.

The real reason for the US$4 is twofold.

In my experience, people appreciate something that they paid for more than the identical thing they got for free.

Flip side: having $4 payments surge into my account would make me feel appreciated and might entice me to improve the script.

In my opinion, all too often, people confuse ‘value’ and ‘cost’. People often save hundreds or thousands of dollars with a script they did not have to pay for. I hope to convince at least some end-users that they should value such ‘free/near-free’ scripts by the value they bring, not by the amount they paid for them.

More about this pet peeve here:

https://coppieters.nz/the-value-paradox-in-adobe-ecosystem-development/

Sidenote: What Is PluginInstaller?

PluginInstaller is a component of the Tightener project.

https://PluginInstaller.com

At present, PluginInstaller is in ‘Minimum Viable’ state. There is still a lot of work to do, but it works. I’ve been using it for my own company for over a year now and have been generating revenue from scripts, plug-ins and extensions.

The aim is to fill a gap and create an ‘open installer/packager for all’. All kinds of add-ons, free or commercial, from all kinds of developers, also beyond the Adobe eco-system.

PluginInstaller can be used for free by indie developers. Some of the features:

Store Window. PluginInstaller comes with an optional default store window. Other developers can opt to use this store, or not use it at all, or integrate their own.

Payment Gateway. PluginInstaller comes with an optional default payment gateway for commercial scripts and donationware (currently I’ve implemented PayPal). Other developers can use this payment gateway, or they can integrate their own.

Software Licensing. PluginInstaller handles activations for commercial software, coupon codes, demo versions, donationware, nagware, allows the users to add a fee to their payment…

Source Code Protection. PluginInstaller embeds protection features similar to JSXBIN, ZXPSignCmd, packaging… in a single packaging program. It protects ExtendScript source code and allows the developer to manage demo versions, activations, time bombs…

Sell ExtendScript: PluginInstaller makes it commercially viable to sell ExtendScript solutions. Many needs can be served with simple scripts that don’t need full-fledged CEP or UXP solutions.

Currently, PluginInstaller handles InDesign ExtendScript, InDesign UXPScript, InDesign CEP panels. More to come as time and money permit.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Elegance Is Not A Goal

This is the 10th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #10: There is no prize for elegance.

If elegance serves readability, I’ll take it.

But I don’t strive to write elegant code. I will do it if I can, or if the problem demands it, but in run-of-the-mill code, I won’t chase elegance at the cost of clarity.

In my experience a lot of elegant code is ‘deep code’, and demands serious thought before it can be understood.

One of the most famous examples is the fast inverse square root algorithm from Quake III Arena. This algorithm computes 1/√x approximately 4 times faster than using standard floating-point operations. It uses a magic constant and bit-shifting operations that exploit the IEEE 754 floating-point representation in a mathematically elegant way. It’s nearly impossible to understand at first glance.
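For illustration, here is that algorithm transliterated to JavaScript typed arrays (the original is C; this sketch keeps the famous magic constant and a single Newton-Raphson step):

```javascript
// The Quake III fast inverse square root, transliterated to
// JavaScript. A shared buffer lets us reinterpret the same 4 bytes
// as either a 32-bit float or a 32-bit unsigned integer.
const buf = new ArrayBuffer(4);
const asFloat = new Float32Array(buf);
const asBits = new Uint32Array(buf);

function fastInvSqrt(x) {
    const halfX = 0.5 * x;
    asFloat[0] = x;
    // The 'magic constant' bit hack on the IEEE 754 representation
    asBits[0] = 0x5f3759df - (asBits[0] >> 1);
    let y = asFloat[0];
    y = y * (1.5 - halfX * y * y); // one Newton-Raphson refinement
    return y; // approximately 1 / Math.sqrt(x)
}
```

Even with the comments, the 'why' of the magic constant is invisible: that is exactly what makes this 'deep code'.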

Such code is needed and has its place, but only within a very narrow context.

New Is Not Better

I need to frequently hop between multiple environments – ExtendScript, C++, PHP, JavaScript, TypeScript… spanning multiple generations of these programming languages.

One thing I observe is that as programming languages are modernized, they seem to gain new syntax. Languages also seem to be prone to some form of jealousy. For example, Python will add some cool tricks, and soon enough similar constructs will also show up in other languages.

These newer language features often add elegance, yet only sometimes add clarity.

I’ll look at JavaScript next, but the core ideas apply to other environments as well.

JavaScript

Features I Avoid Unless Justified

Take the arrow function => notation for functions in JavaScript. As far as I can tell, the main advantage is needing fewer keystrokes. It makes the code elegant and denser, and readability suffers.

There are also the destructuring features and spread/rest syntax. Yes, more elegant code, but I find myself needing more time to read and understand code that uses these constructs.

Optional chaining: very elegant, but it increases the likelihood of bugs slipping by unnoticed.
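As a made-up illustration of the trade-off (none of this comes from a real code base): the same logic written densely with arrow functions and destructuring, and then written out explicitly:

```javascript
// Dense: arrow functions plus destructuring. Short, but more to
// decode when reading.
const adultNamesDense = (people) =>
    people.filter(({ age }) => age >= 18).map(({ name }) => name);

// Explicit: longer, but each step is easy to follow and easy to
// inspect in a debugger.
function adultNamesExplicit(people) {
    var result = [];
    for (var idx = 0; idx < people.length; idx++) {
        var person = people[idx];
        if (person.age >= 18) {
            result.push(person.name);
        }
    }
    return result;
}
```

Both produce the same result; which one reads better is, of course, a matter of taste and context.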

Then there are transpilers and polyfills, which provide modern features in old versions of JavaScript. My suspicion is that these come with overheads that must be accepted wholesale, and build processes get more complex.

Features I Like

Some of the code I write will be guaranteed to run in a modern JavaScript context, in which case I can and will use some of the modern JS features.

In that case, keywords like let and const are useful improvements. They do not do much for elegance, but they significantly improve reliability and reduce accidental bugs.

Other positive changes: for…of, default parameters, template literals: all of these can help make the code easier to understand.
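A short made-up snippet that combines these features (the names are illustrative):

```javascript
// Illustrative snippet combining const/let, a default parameter,
// for...of, and template literals.
const GREETING = "Hello"; // const: cannot be accidentally reassigned

function greetAll(names, punctuation = "!") { // default parameter
    let result = [];
    for (const name of names) { // for...of: no index bookkeeping
        result.push(`${GREETING}, ${name}${punctuation}`); // template literal
    }
    return result;
}
```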

What I Value More Than Elegance

Clarity over terseness. I don’t mind repeating myself or adding a few extra lines if it makes the logic easier to follow. To me DRY (Don’t Repeat Yourself) is not dogma – it’s a helpful rule of thumb.

Predictability over novelty. Language features that behave in subtle or surprising ways tend to age poorly. The fewer hidden rules, the better.

Debuggability over brevity. I want to be able to drop into a debugger and understand what’s going on, no extra decoding required.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Avoid Literals

This is the 9th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #9: Try hard to avoid literals.

In nearly all code I write, I need to reference some constants: numerical constants like 3.1415926, string constants like "this" or 'that', color constants…

In the heat of the moment, it’s easy to just type in the number or the string and be done with it.

But I find that avoiding literals in my code and instead using named constants offers multiple benefits.

Disadvantages Of Literals

First, some disadvantages of using literals:

Typos Are Not Always Errors

A typo in a literal is often not an error. The code will compile and execute, but it will also be wrong.

Example: if I am using ExtendScript and testing whether a variable contains a string, I might write:

if ("string" == typeof v) { ... }

Sometimes, when I am editing code, my cursor is somewhere in the document and I might accidentally hit a key as I clumsily reach for my coffee, inadvertently changing this to read

if ("stxring" == typeof v) { ... }

The code is now broken, and the breakage is not obvious. I might spot this later when I commit the code to a git repository, but that would be a lucky coincidence.

Now, imagine I introduce a constant instead of the literal:

const TYPEOF_STRING = "string";
...
if (TYPEOF_STRING == typeof v) { ... }

If I now accidentally hit a key in the same spot, the code becomes

if (TYPxEOF_STRING == typeof v) { ... }

and my code editor will complain. And when I try to run this, an error will occur. The issue won’t go unnoticed.

Accidental Equality Spoils Find-And-Replace

Another issue is that my code often has multiple identical strings or values, some with different meanings.

For example, I might have two database tables (say, CUSTOMER and PURCHASE) that both contain a column CUSTOMER_ID.

Imagine the code is littered with literal strings "CUSTOMER_ID". I will have to carefully read the code to derive from the context which column is being referenced, the one in the CUSTOMER table or the one in the PURCHASE table.

Or, say, I might have multiple strings "green". Sometimes it is a reference to a named CSS color, sometimes it is a string to be inserted into a message displayed to the user.

I will do something like this:

const COL_NAME_CUSTOMER_ID = "CUSTOMER_ID";
const COL_NAME_PURCHASE_CUSTOMER_ID = "CUSTOMER_ID";
...
const CSS_COLOR_GREEN = "green";
const NORMAL_STATUS_NAME = "green";

By writing code that uses such named constants instead of the literals, the code becomes more self-explanatory, and there is less room for confusion.

When exploring code (my own or someone else’s) I use my text editor to do ‘global finds’ for interesting strings. Globally finding stuff is a great way to explore a large code base.

If the code is littered with hundreds of literal strings "CUSTOMER_ID" I cannot do a targeted search for only those areas in the code that access the PURCHASE table. I will also ‘catch’ all the code that accesses the CUSTOMER table.

On the other hand, if the code is using named constants, I can simply do a ‘find’ for COL_NAME_PURCHASE_CUSTOMER_ID and find only the areas of the code I am interested in.

Advantages Of Named Constants

Using named constants comes with a few advantages.

Easy To Change The Values

I might have something like:

const CUSTOMER_NAME_ERROR_COLOR = RGB(255,0,0);
const CUSTOMER_NAME_OK_COLOR = RGB(0,255,0);
…
const BUTTON_SHADE_COLOR = RGB(0,255,0);

Note that CUSTOMER_NAME_OK_COLOR and BUTTON_SHADE_COLOR have the same value, but have a different meaning.

Imagine that it turns out that this does not work well for people with red-green color blindness, and I want to change CUSTOMER_NAME_OK_COLOR to something different.

If the code is consistently using such named constants, I can easily tweak a single line of code and change the colors for better contrast.

On the other hand, if the code is littered with references to literal RGB(0,255,0) I need to use my text editor and perform a global find-and-replace.

And this becomes problematic because sometimes RGB(0,255,0) is a button shade color. I need to carefully read lines of code to make sure I am not changing a button shade color instead of a customer name color.

Easy To Read

Using named constants with carefully chosen names helps make the code more self-explanatory.

A simple example: in my own logging code, I support multiple levels of logging, from ‘NONE’ (mum’s the word) to ‘TRACE’ (crazy chatterbox).

Internally, these levels correspond to integers 0 – 4. But the code uses named constants rather than literal values, which makes the code easier to follow.

By judiciously choosing meaningful names, I can avoid having to insert comments.

const LOG_LEVEL_NONE = 0;
const LOG_LEVEL_ERROR = 1;
const LOG_LEVEL_WARN = 2;
const LOG_LEVEL_NOTE = 3;
const LOG_LEVEL_TRACE = 4;
...
function logNote(reportingFunctionArguments, message) {
    if (LOG_LEVEL >= LOG_LEVEL_NOTE) {
        if (! message) {
            message = reportingFunctionArguments;
            reportingFunctionArguments = undefined;
        }
        logMessage(reportingFunctionArguments, LOG_LEVEL_NOTE, message);
    }
}

After a coding stint, I will spend some time renaming variables, functions and constants in an effort to make the code more self-explanatory.

Most modern IDEs have built-in refactoring functionality that allows me to rename things (variables, functions, constants…) and automatically update all references to them.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Software Development For The Adobe Ecosystem: A Value Paradox.

After 30+ years developing software, and 20+ years developing software for the Adobe ecosystem, I have some observations about the disconnect between value and compensation.

A Common Client Journey

Step 1: recognition. A creative professional realizes they’re wasting time on repetitive tasks that could be automated.

Step 2: discovery. They find a free script or plug-in that (partially) solves their problem and might manually handle the rest.

Step 3: inquiry. When they reach out about customization, they’re surprised by the cost, typically three or four zeros.

Step 4: roadblock. Without budget authority, the conversation ends or enters a challenging approval process.

The worst scenario? When approval comes but payment doesn’t, forcing developers to implement protections like time-bombing their work.

The Free Software Trap

Releasing free scripts and plug-ins can earn a developer some ‘kudos’ but also creates perception problems.

  • A near-$0 price tag is perceived as “$0 value” rather than “$1000 value at $0 cost”.
  • If the software is free, users expect unlimited free support.
  • Donationware does not work: everyone assumes someone else will donate.

The Communication Challenge

How do we, as developers, effectively convey that free doesn’t mean “without value”? Every script and plug-in represents someone’s time, expertise, and effort.

Custom development at $1000 isn’t “expensive” when it is compared to the true value or the true savings.

What we really want is a fundamental shift in how prospective customers perceive the value of automation in their workflow.

Prompt-Whack-A-Mole

Tools like MATE now use AI to generate custom scripts for non-coding users, seemingly threatening custom development.

As someone who leverages AI for coding myself, I’ve experienced the significant limitations of AI code generation firsthand.

Beyond a certain complexity threshold, AI struggles with the comprehensive view needed for robust solutions.

Attempting to generate complex scripts becomes a game of “prompt-whack-a-mole”. Fix one issue, another pops up elsewhere.

AI excels at specific, contained tasks but falls short when integrating multiple components or handling edge cases that an experienced developer will anticipate.

This reinforces rather than diminishes the value of experienced developers. We’re not just code writers. We’re architects who understand the entire ecosystem and can design solutions that stand the test of time and use.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected]. We create custom scripts and plug-ins, large and small, to speed up and take the dread out of repetitive tasks.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan