Newbie > Newbie: Applying Algorithmic Design



This blog post will discuss some important considerations when designing your programs. Even though computers keep getting more powerful, it is still up to us programmers to choose the most efficient way to solve the problem in front of us. Efficient programming minimizes the time needed to complete the desired tasks and makes the best use of resources. To be efficient, we must fully understand the task at hand and the data structures we will be working with. Memory, storage (both its amount and type), CPU, and even time are all examples of resources, and some are more costly than others depending on their scarcity.

A data structure is composed of the type of data it holds and the way that data is organized, or how the pieces relate to one another. Data structures are built from primitive and non-primitive types. Examples of primitives are integer, real, character, and boolean; these define the data piece of the data structure. Non-primitive structures fall into two broad groups: linear and non-linear. Arrays, lists, stacks, and queues are examples of linear structures, while trees and graphs are examples of non-linear ones.

Each type of data and structure has unique properties that may make it advantageous compared to another.
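
To make that concrete, here is a minimal Python sketch (the variable names and sample values are just for illustration) showing two linear structures built from the same primitive data: a stack, which hands back the most recently added item first, and a queue, which hands back the oldest item first.

```python
from collections import deque

# A stack: last in, first out (LIFO).
stack = []
stack.append(2)   # push
stack.append(3)
stack.append(4)
print(stack.pop())      # 4 -- the most recently added item comes off first

# A queue: first in, first out (FIFO).
queue = deque()
queue.append(2)   # enqueue
queue.append(3)
queue.append(4)
print(queue.popleft())  # 2 -- the oldest item comes off first
```

Same integers, different organization, and therefore different behavior; that relationship between the data and its arrangement is what makes one structure a better fit for a given problem than another.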

Algorithmic design is another crucial component of programming, and it requires you to fully understand how you will interact with the data and what actions need to be performed. Some steps manipulate the data, while others arrange it in a sequence. An algorithm must have the following characteristics: it must be repeatable, it must terminate, it must perform a series of well-defined steps, and it must generate the correct output. A simple algorithm might be to add 4 to every number in a list, so if we have the list 2, 3, 4, 5 and we add 4 to each, our new list after running the +4 algorithm will contain 6, 7, 8, 9. The design portion relates to the code that performs the action. Is a loop the best way to achieve the desired output? Maybe a recursive solution is the most efficient way. This leads us to efficiency.
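
Here is one way the +4 algorithm above might look in Python, sketched both as a loop and as a recursive function (the function names are just for illustration). Both satisfy the characteristics listed: they terminate, follow a series of known steps, and produce the correct output.

```python
def add_four_loop(numbers):
    """Add 4 to every number using a simple loop."""
    result = []
    for n in numbers:
        result.append(n + 4)
    return result

def add_four_recursive(numbers):
    """Add 4 to every number using recursion instead of a loop."""
    if not numbers:          # base case: an empty list means we are done
        return []
    # handle the first item, then recurse on the rest of the list
    return [numbers[0] + 4] + add_four_recursive(numbers[1:])

print(add_four_loop([2, 3, 4, 5]))       # [6, 7, 8, 9]
print(add_four_recursive([2, 3, 4, 5]))  # [6, 7, 8, 9]
```

Both produce the same output, but the design choice still matters: in Python the loop is usually the cheaper option, since every recursive call adds its own overhead and builds a new list.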

Efficiency is most commonly measured by how long a task takes, but that is not the whole story, because time may not be the most costly factor. Suppose CPU were the most costly resource; we might schedule the task to run after hours, when CPU is less of a constraint. But let's suppose instead that time to complete is what matters most. Say I have a list of values I want sorted, and I may need to sort it multiple times a day because items are continually being added. Maybe we only need the maximum or minimum value, or maybe we need the entire list sorted to prioritize the results.

The algorithm we use to sort the list will have a performance impact determined largely by the number of comparisons it requires. The efficiency of an algorithm can be expressed in what is called Big O notation, which describes how the number of operations, such as comparisons, grows as the input grows. When Big O notation is applied to algorithms, their performance is usually described in terms of best, average, and worst cases. For a sorting algorithm, the best case is the fewest steps it could take to complete the sort; conversely, the worst case is the maximum number of steps needed. The best possible outcome might be a list that is already sorted, so the algorithm has almost nothing to do, while in the worst case the algorithm must run through its entire routine because it has no way of knowing it is finished.
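
As a rough illustration, here is a sketch that counts comparisons in a bubble sort (chosen here only because its best and worst cases are easy to see; the post does not prescribe a particular sorting algorithm). An early exit lets it notice when the list is already sorted.

```python
def bubble_sort(values):
    """Return a sorted copy of values plus the number of comparisons made."""
    items = list(values)
    comparisons = 0
    for i in range(len(items) - 1):
        swapped = False
        for j in range(len(items) - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:   # no swaps on this pass: the list is already sorted
            break         # best case: stop early
    return items, comparisons

already_sorted = list(range(10))          # best case: nothing to do
reverse_sorted = list(range(10, 0, -1))   # worst case: everything out of order

print(bubble_sort(already_sorted)[1])   # 9 comparisons, roughly n
print(bubble_sort(reverse_sorted)[1])   # 45 comparisons, roughly n*n/2
```

In Big O terms, the early exit gives a best case of about O(n) comparisons on an already sorted list and a worst case of O(n²) on a reverse-sorted one, which is exactly the best-versus-worst gap described above.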

Combining all the elements above should allow you to make an educated guess about the algorithmic design and data structure required to complete your task. One thing to note when it comes to resources: it may be cheaper to buy additional RAM than to sacrifice drive space. I hope you have found this blog helpful.
