Posted on

The Drama of CA Rule 21: Why Not a Netflix Mini-Series?

The implementation of the smart inverter requirements spelled out in California’s Interconnection Rule 21 is like a streaming media mini-series, complete with the drama one expects from fictionalized life.

The drama started back in 2013 with the establishment of the Smart Inverter Working Group (SIWG) under the auspices of the California PUC. Season One of the series had its ups and downs, but utilities and vendors worked together quite effectively to develop a model for how smart inverters should behave and how communications with them should be standardized using IEEE 2030.5.

Season One: Harmony

Season One started off on a feel-good note built around stories of working together and solving challenging problems. Major industry players came to some remarkable agreements and produced several documents on how the industry should move forward. Phase One (2014) specified how smart inverters should behave in the absence of any communications and instructions from the grid. This ended up as UL 1741 SA, so that inverters could be certified to the standard. These functions were also adopted by IEC 61850-7-420, IEEE 1547-2018, IEEE 1815 (DNP3), SunSpec Modbus, and IEEE 2030.5-2018.

Phase Two (2015) specified how the communications would be standardized between the grid operators and the DERs and identified IEEE 2030.5 as the default protocol for these communications. IEEE 1547-2018 also added IEEE 2030.5 as one of the three optional protocols for 1547 certification testing. Phase Two was adopted, and a date was set for when any DER system communicating directly with a utility had to be certified to the CA Rule 21 IEEE 2030.5 requirements. That date was initially February 22, 2019 (nine months after publication of the SunSpec Test Specification) and was moved out six months in January of 2019 (to August 22, 2019) when it became apparent that the effort to implement the software and the certification program itself was significantly harder than expected. (UPDATE: The date has now been changed to June 22, 2020.)

Phase Three (2016) specified additional smart inverter functions that by their nature require communications. These include setting schedules and monitoring DER status, alarms, and the like. These functions were also adopted in IEEE 2030.5-2018 and, for the most part, in IEEE 1547-2018.

Season Two: Discord

Up to this point, the process and work group functioned rather well together. But now we are entering Season Two and the drama begins to unfold.

In February of this year (2019), the California Solar and Storage Association (CALSSA), the primary voice of the industry in the CPUC proceedings, filed a Petition to modify the utilities’ plans for implementing the requirements of Phase Two and Phase Three. The Petition took the CA IOUs to task for issuing Advice Letters that:

“relied on a consensus process to develop details that were not contained in the Advice Letters. When the investor-owned utilities issued more detailed implementation plans, it became apparent that parties were far from reaching consensus on certain issues. CALSSA requests that the Commission modify the Resolutions to require the advice letters to include more details and not to exceed areas of consensus…”

Where Does IEEE 2030.5 Stop?

One issue CALSSA raised concerned SDG&E’s implicit requirement that all smart inverters support IEEE 2030.5 locally, as opposed to the SIWG’s agreement that communications with inverters managed by an aggregator or building EMS were out of scope for CSIP and therefore did not need to be certified for IEEE 2030.5.

Getting to End-End Assurance – But How?

A second issue was SDG&E’s plan to develop and conduct its own testing to validate end-end performance of smart inverters when controlled by an aggregator or building EMS. CALSSA took strong exception:

“A one-sentence test procedure stated in a utility document that appears to have been hurriedly drafted bears no resemblance to the consensus-based testing protocols that are painstakingly developed by a diverse set of experts. Also, the CEC Approved Equipment List contains 548 inverter models that have been certified to the latest mandatory standard. The IOUs have not demonstrated that they can handle that volume of testing.”

CALSSA suggested an end-end DER test procedure of its own to be used until a formal procedure can be developed and implemented by the NRTLs. In the meantime, it stated, it may be necessary to again delay the date by which systems must be SunSpec CSIP IEEE 2030.5 certified.

Not to be bullied, the IOUs came back with strong pushback against the CALSSA Petition and upped the ante by claiming that, since the new IEEE 1547.1 would solve the end-end testing issue (which it won’t), any certification date for communications should be pushed out at least 18 months until 1547.1 products are on the market.

QualityLogic, SunSpec, and the Commission Staff pushed back on both CALSSA and the IOUs, arguing that their premise was incorrect and that further delaying certification requirements would only create uncertainty and confusion in the industry and cause a major setback to the goal of integrating DER resources into grid operations in California. The major disconnect is that the utilities are looking for assurance that a message they send will result in the intended behavior changes of the target inverters (which the current CSIP IEEE 2030.5 testing does not provide). IEEE 1547 testing only assures that very specific messages in one of three protocols (DNP3, SunSpec Modbus, or IEEE 2030.5) result in the intended inverter behavior. An inverter need not be (and probably won’t be) tested with an aggregator or building EMS system. So there is no concept of an end-end test in 1547 interoperability testing.

QualityLogic, in its own comments, elaborated that:

The hope that IEEE 1547.1 will address the end-end testing issue is misplaced. What IEEE 1547.1 will not do is:

  • IEEE 1547.1 will not require that one of the standard protocols is used in the installation and operation of an inverter. While it ensures a “capability” to use one of the protocols, the requirement to use it will be a vendor-, utility-, or policy-specific decision.
  • It does not ensure that the local interface for IEEE 1547.1 testing will be IEEE 2030.5. That will be up to the inverter vendor. If they already have a SunSpec or DNP3 local interface, that may well be the protocol used for IEEE 1547.1 certification.
  • There is no “end-end” testing in IEEE 1547.1. The certification only validates that a correct message in one of the protocols from a simulated aggregator, utility, cloud-based adapter, EMS, etc., will result in the desired performance. There is no testing with a specific EMS, aggregator system, utility DERMS, or any other source that may be sending real instructions to the inverter.
  • The IEEE 1547.1 Interoperability test is not a protocol test. While it ensures that the IEEE 1547 functions can be managed via a specific protocol (including monitoring and scheduling), it does not validate that the rest of the protocol is functioning correctly. That is what is done in a protocol test such as the SunSpec IEEE 2030.5 CSIP test. This means that an inverter can pass a 1547.1 interoperability test but still not communicate correctly with a production server for that protocol.
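
To make the last distinction concrete, here is a minimal Python sketch of the protocol side of IEEE 2030.5: a client parsing a DeviceCapability response to discover where the server publishes DER programs. This is the kind of resource-discovery exchange a CSIP protocol test exercises against a real server, and which a 1547.1 interoperability test does not. The sample XML and helper function are our own illustration, not part of any certification suite.

```python
# Illustrative sketch only: parse an IEEE 2030.5 DeviceCapability
# response and discover the DERProgramList link, the first step a
# client takes before it can receive any DER control messages.
# The XML below is a hand-written sample, not utility production data.
import xml.etree.ElementTree as ET

NS = {"sep2": "urn:ieee:std:2030.5:ns"}

SAMPLE_DCAP = """<DeviceCapability xmlns="urn:ieee:std:2030.5:ns" href="/dcap">
  <EndDeviceListLink href="/edev" all="1"/>
  <DERProgramListLink href="/derp" all="1"/>
</DeviceCapability>"""

def discover_der_program_link(dcap_xml: str) -> str:
    """Return the href of the DERProgramList advertised by the server."""
    root = ET.fromstring(dcap_xml)
    link = root.find("sep2:DERProgramListLink", NS)
    if link is None:
        raise ValueError("server does not advertise a DERProgramList")
    return link.attrib["href"]
```

A protocol test validates many exchanges like this one, plus security, subscriptions, and error handling; a 1547.1 interoperability test only checks that a correctly formed control message produces the right inverter behavior.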

Season Three: In Progress

Season Two ended in a classic cliffhanger, and Season Three is currently in production. The big question Season Three must answer is, “What will be the CPUC’s decision on the CALSSA Petition?” The CPUC issued a Draft Resolution in the CA Rule 21 proceeding on June 6, 2019.

The Draft Resolution, in response to the Petition of the California Solar & Storage Association for Modification of Resolutions E-4832 and E-4898, proposes to move the compliance date from the current date of August 22, 2019, to January 22, 2020.* It further re-affirms that individual inverters behind either an aggregator or a building EMS do not need to be IEEE 2030.5 CSIP certified.

The IOUs had proposed a delay of 18 months or more, but the CPUC did not agree and chose instead a five-month delay.

As we watched the various comments filed with the Commission, we felt compelled to weigh in, and our comments were instrumental in shaping parts of the proposed Resolution. For example:

“QualityLogic further argues that a delay in the compliance deadline for Phase Two and Functions 1 (Monitor Key Data) and 8 (Scheduling) of Phase Three would be counter to California’s renewable energy targets. QualityLogic states that, in order to avoid hindering the State’s climate goals and avoid undermining faith in the CPUC’s Rule 21 process, the August 22, 2019 deadline should not be delayed.”

Our primary intent is to support our customers who have been investing in meeting the CA Rule 21 CSIP requirements. We want to make sure that their investments are recognized and remain valuable by minimizing delays in the requirement date.

Unfortunately for inverter vendors, there is still significant ambiguity about what they need to do for protocol support. If they anticipate any direct communications between their inverter and the utility DERMS, they need a certified interface (which could be local or cloud based). IEEE 1547, meanwhile, will require demonstration of local communications with one of three named protocols (DNP3, IEEE 2030.5, or SunSpec Modbus). So implementing IEEE 2030.5 ensures conformance both to CA Rule 21 for direct communications and to IEEE 1547.

There is another 30-day comment period, but once the CPUC issues a draft, it is likely to become the formal Resolution with minimal changes. QualityLogic is conducting a deeper analysis of the 40-page draft Resolution. It contains additional testing requirements that vendors and labs will have to pay attention to, as well as a directive to leverage IEEE 1547.1 within a specified period after the approval of the standard.

The complete set of filings before the CPUC, including the Petition and the Responses and Replies to Responses, is available here. The proposed Resolution issued June 6 is also available here.

Season Three should end when the final CPUC Resolution is approved. Season Four will delve into the latest twists to the plotlines introduced this year.

*As we expected, on December 24, 2019, the California Public Utilities Commission extended the deadline for complying with the CSIP certification requirements to March 22, 2020 (UPDATE: The date has now been changed to June 22, 2020 in response to the COVID-19 pandemic). This is not a change in policy (products still need to be certified). Rather, it is a recognition of the time needed to complete the complex process of development, testing, certification, and CEC listing.


Infographic: Our Process

When it comes to delivering quality software, it is vital that no stone is left unturned. That is why you need a team of experts with the skill and expertise to ensure that your application meets the functionality, performance and accessibility standards necessary for your project’s success. Our team has over 30 years of experience following a proven process to help companies achieve software quality success.

[Infographic: Our process for QA software testing]

The Importance of QA Software Testing

QA testing is a critical project task that can no longer be relegated to the end of the project life-cycle. The amount of time spent on validation directly impacts the user experience, which ultimately affects your bottom line. An effective QA software testing strategy is one that ensures the application can deliver an outstanding customer experience within the defined functional, performance and interoperability requirements. It is one that helps you establish and maintain a competitive advantage based on error-free software that is innovative and feature-rich in areas that are most important to your customers.

How Our Approach is Different and Why We are Successful

We understand that every project is different. That is why we work together with you to develop a detailed custom project plan that is tailored to your project’s needs. That way, our testing effort focuses only on what is essential without any wasted time spent on work that isn’t necessary. Your project team is hand-picked based on the skills required for your project so you can rest assured that your system will receive the expertise needed for a proper assessment.

Working with us, you won’t have to change the way you do things. We align with your processes and integrate with your systems. Whether you are using waterfall, agile, DevOps, or some combination, our team has the experience and expertise to jump right in. We will follow your test plans or write them if you don’t have them. We can collaborate using your bug reporting system or provide you with one if needed. Not only that, our world-class testing labs have the latest connected devices and tools to test every aspect of your software.

When it comes to test automation, we understand that there is no “one size fits all” approach. We will work with you to develop a balanced approach. Our team of experts evaluates the system to identify which features lend themselves to test automation and which do not.

Our testing environment is designed to grow as needed. We use the latest technologies to scale it quickly and with minimal cost to you.

Why Work with our Software Testing Company?

QualityLogic has over 30 years of QA software testing experience across a variety of industries. During this time, we have built custom QA tools, frameworks, and methodologies that we’ve used to help companies improve their software QA process, allowing organizations to accelerate their development cycle and deliver better software. Our full range of services includes:

  • QA assessment
  • Planning
  • Test case development
  • Test automation
  • Functional testing
  • Performance testing
  • Compatibility testing
  • Interoperability testing
  • Usability testing
  • Accessibility testing
  • Test utility and tool development

You are in complete control when you work with us. Our flexible services model allows you to make changes at any point in the project. You can accelerate or decelerate the schedule and change the number or the types of resources needed. Not only that, you determine how we can best help. We can provide resources on a project by project basis or provide dedicated QA teams. Additionally, in all models, we can increase the team size for a specific time period to help with surges or spikes. We offer you all of this with no long-term contracts, no change fees and a performance guarantee. If you aren’t satisfied with something, we’ll fix it at no charge.

All our QA experts are in the U.S., so your project doesn’t suffer the time zone, language, and cultural challenges that hamper many projects. Almost half of offshore clients report that what they get falls short of their expectations. With QualityLogic, that does not occur. When you partner with us, you’ll avoid the costly mistakes that hinder your success. Our goal is to help you achieve the highest quality software that will delight your customers. To that end, we don’t leave until you tell us the job is done. We can provide you with a short-term engagement that runs from several weeks to months. Or we can support your long-term QA software testing efforts through multi-year engagements designed to fit your needs.

A Structured QA Software Testing Approach That Gets Results

All our projects follow a very structured approach that ensures everything is progressing as expected within the time and budgetary requirements you specify. There are no surprises when you work with us. Your project doesn’t start until you approve the project proposal. Once the project begins, we’ll both have a clear view of expectations. We assemble a team with the specific skills needed to test your application thoroughly, so we can ramp up quickly and hit the ground running.

Throughout the project, we will keep you informed through regular updates and check-ins. These check-ins have much more significance than tracking the timeline and budget. They give us the insight we need to alter course if necessary. We identify any problem areas and adjust before things get off course.

When you work with QualityLogic you can rest assured that you are partnering with a company that has proven experience across a variety of industries, methodologies and tools. We know QA better than anybody and we are ready to help you succeed.


Ready for Title 24? Building EMS Vendors are in for a Surprise.

What is Title 24?

Title 24 contains the building standards code for all residential and commercial buildings in California. The codes are a set of broad requirements for energy conservation and green design applied to structural, mechanical, electrical, and plumbing systems. Every three years, an updated set of standards is published to incorporate the latest technology and methods that increase energy efficiency. The most recent set was published in 2019 and contains important updates that take effect Jan 1, 2020.

This set of standards is important as it begins to put greater emphasis on grid management and interoperability by specifying how buildings should communicate with the electric grid. The energy interface to the electric grid is now an essential part of the overall building design and efficiency. Title 24 contains its own nuances as to how smart grid resources and buildings that can manage power consumption or put power back into the grid need to operate.

In this blog we will look at the communications requirements for buildings for both residential and commercial applications in Title 24. For reference, all citations regarding residential requirements in this blog are from the 2019 Residential Compliance Manual portion of Title 24, and all citations regarding the commercial requirements are from the 2019 Nonresidential Compliance Manual. The general report, both manuals, and more information on Title 24 can be found in full here.

Managing the Smart Energy Grid

To best manage the demand for energy in California, buildings are broken into commercial and residential categories. When you think about it, the needs of family homes and apartments are quite different than large office spaces. The separation of requirements for the two building categories allows for each to contribute differently to managing grid power requirements and stability.

Commercial spaces have larger daytime lighting and heating demands with occupancy during the day. Residential spaces have lower daytime heating and lighting demands and are occupied during the morning and night.

Title 24 Residential Communications Requirements

One of the most important updates on the residential side of Title 24 is the mandate requiring all new homes built after Jan 1, 2020 to have a certain amount of installed solar capacity (Chapter 7, Section 7-7). The required capacity is calculated with an equation that depends on factors such as the residence’s climate zone and sun exposure. Title 24 defers to CA Rule 21 when it comes to smart inverter communications.
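
For a rough sense of how the sizing works, the residential PV equation in the 2019 standards (Equation 150.1-C) takes the form below: capacity grows with conditioned floor area and dwelling-unit count, scaled by two climate-zone coefficients. The coefficient values in this sketch are placeholders for illustration, not the published Table 150.1-C figures.

```python
# Hedged sketch of the Title 24 residential PV sizing formula:
#   kW_PV = (CFA x A) / 1000 + N_dwell x B
# where CFA is conditioned floor area (sq ft), N_dwell is the number
# of dwelling units, and A and B are climate-zone coefficients.
# The coefficients below are illustrative placeholders only.
def required_pv_kw(cfa_sqft: float, n_dwellings: int,
                   a_coeff: float, b_coeff: float) -> float:
    """Return the required PV system size in kW (DC)."""
    return (cfa_sqft * a_coeff) / 1000 + n_dwellings * b_coeff

# Example: a 2,000 sq ft single-family home, illustrative coefficients
size = required_pv_kw(2000, 1, a_coeff=0.8, b_coeff=1.15)
```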

Title 24 indicates that “battery system[s] shall have the capacity to discharge electricity into the grid upon receipt of a demand response signal from the local utility or a third-party aggregator.” It specifies in Appendix H that “DR controls must have the ability of communicating with the entity that initiates a DR signal by way of an OpenADR certified Virtual End Node (VEN).” (Appendix H, H-2) Beyond conformance to OpenADR for DR events, an Energy Storage System (ESS) must also “comply with all applicable requirements specified in Rule 21.” (Chapter 7, 7-16) With focus on PV and ESS generation at the residential level, California is setting clear expectations that in order to most effectively manage energy consumption, no building is too small to contribute.
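
As a sketch of what this means in practice, a residential ESS controller sitting behind its OpenADR-certified VEN might map the common OpenADR “SIMPLE” signal levels (conventionally 0 = normal through 3 = highest) into a discharge decision. The mapping and setpoints below are invented for illustration; Title 24 requires the capability to respond, not this particular policy.

```python
# Illustrative sketch: map an OpenADR SIMPLE signal level, as delivered
# by a certified VEN, into a grid-discharge setpoint for a residential
# energy storage system. Levels follow the common OpenADR 2.0
# convention (0 = normal); the kW policy is our own invention.
def discharge_setpoint_kw(simple_signal_level: int,
                          max_discharge_kw: float) -> float:
    """Return the grid-discharge power the ESS should target."""
    if simple_signal_level <= 0:      # normal operation: no DR discharge
        return 0.0
    if simple_signal_level == 1:      # moderate event: partial discharge
        return 0.5 * max_discharge_kw
    return max_discharge_kw           # high/highest event: full discharge
```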

Title 24 Commercial Requirements

The commercial building requirements in Title 24 set out a solar-ready provisional rooftop area for all new commercial developments that could eventually support a solar energy generation requirement. There is no current mandate to have a photovoltaic (PV) system on commercial buildings. As such, the commercial requirements are not specific regarding the protocols necessary for PV grid communication. If commercial buildings do use PV, and do not intend to sell energy back to the grid, there would not be communication protocol requirements. But if the commercial building PV installation would be connected to the grid and capable of inserting power into the grid, then clear guidelines for communication protocol requirements already exist in CA Rule 21.

Energy Storage Systems (ESS) are treated similarly in the commercial requirements. There is no explicit section on ESS for commercial buildings, and therefore no communication requirements are explicitly mentioned. However, CA Rule 21 does specify how communications will happen if a commercial building has storage that will discharge energy into the grid.

The most important communication requirement for commercial buildings concerns DR events. The commercial requirements use Appendix D to specify that DR communications must use an OpenADR 2.0a or 2.0b profile. The two basic requirements for DR controls include:

  • Conformance to the OpenADR communication protocol to interpret and act on event and price signals
  • Implementation of load shedding hardware that can support signals over WiFi, Zigbee, BACnet, Ethernet, or hard wire

The Appendix clarifies that the VEN can be physically part of the Building Control System or can be cloud based and capable of communicating with the Building Control System. Subsections include considerations for thermostats, HVAC, and lighting. The requirement for large commercial buildings to have DR Control Systems that are reactive to OpenADR price and other event signals makes clear the value California places on coordinating large-scale usage of energy in commercial buildings with grid operations.

Creating an Interoperable Ecosystem

California’s goal is to create an interoperable grid ecosystem. Without effective ways to communicate information between utilities and smart grid resources or the systems that control them, managing a complex, dynamic energy landscape is expensive if not impossible. It is a major step forward to see two mandates in CA requiring the use of industry communications standards. California wants to ensure that communication protocols to residential and commercial buildings are standardized so that the costs of implementing communications to manage customer DER assets are minimal. DR events for smart grid resources and DERs for both residential and commercial buildings are standardized using the OpenADR 2.0 protocol. This consistency is good. While the residential communications requirements are more focused on DERs like PV and energy storage, the commercial requirements focus on assuring standardized communications with EMS.

With the changing composition of the grid toward DER resources, it is imperative that load shedding and generating resources be controllable via communication protocols. Therefore, communication interoperability must be an important focus for utilities and vendors alike. Since California is a major market for any vendor of energy consuming or generating devices and systems – e.g., it is 50% of the solar PV market in the US – vendors all over the world are adopting the communications protocols required in California. This in turn creates a growing global ecosystem of vendors and implementers that supply and implement the standardized communications specified in CA. This makes it much easier for utilities elsewhere to piggyback on the pioneering work in California. Title 24 sets the stage for the adoption of communication standards for residential and commercial buildings on a global scale.

What Does This Mean for Vendors?

We are being asked today by our vendor customers to help them understand the impact of Title 24 on their business as it relates to implementing standard communications protocols. The answers are unfortunately not as clear as we would like. From an OpenADR perspective, vendors are required to do the following:


  • ESS must be able to accept a DR signal by way of an OpenADR certified VEN (Chapter 7, 7-14; Appendix H, H-2)
  • DR controls installed at the circuit level for HVAC equipment must meet complete requirements for DR thermostatic controls (Appendix H, H-4)
  • Energy Management Control Systems must comply with required thermostatic and lighting control functions in Title 24 (Appendix H, H-4)

For commercial buildings, the following also apply:

  • Buildings without direct digital control (DDC) to the zone level require single-zone air conditioners and heat pumps to have DR thermostats, or occupant controlled smart thermostats (OCSTs).
  • Buildings larger than 10,000 sq. ft. must be equipped with DR controls for indoor lighting systems.
  • Electronic Message Centers that have a lighting load greater than 15 kW must have demand responsive controls.
  • If DR controls are installed as part of the power distribution system, the controls must meet DR requirements.
  • EMCS installed to perform lighting control functions must meet DR requirements to be compliant with Title 24.

For vendors of any DER (solar PV, generators, ESS, and potentially EV charging) where energy will be sold back or otherwise injected into the utility grid, Title 24 also mandates conformance to CA Rule 21.

If changes to Title 24 affect your company’s need to conform to the OpenADR protocol, or even the IEEE 2030.5 protocol, then get in touch with our team and we can walk you through the requirements. Our team has trained hundreds of developers through our training classes and has created dedicated, OpenADR and SunSpec Alliance approved Test Tools for development and pre-certification efforts.


Learn from the Functional Testing Services Pros at QualityLogic: A Helpful Guide for What it Takes to Perform Thorough Functional Testing

Functional testing services represent a form of black box testing that validates that software works as expected. Specifically, it is testing undertaken from a user perspective that evaluates each feature against a set of acceptance criteria. The results are categorized as either “passed” or “failed” based on those criteria. Developers modify the code to fix each failed feature. After the developer implements their code changes, the test technician reruns the test to ensure nothing else was affected by the fix. The cycle continues until the product meets the predefined acceptance criteria.

There are many forms of validation that fall under the umbrella of functional testing services, each of which validates a specific area of the system. Teams typically employ the most common types, which include usability, interface, and regression testing. Keep reading for details on what it takes to perform thorough functional testing for your software systems.

What is Functional Testing?

Functional Testing Services Defined

Functional testing services refer to the work required to validate that software works as expected.   

A test begins when a test technician inputs data into the system to evaluate a specific function. The test technician assesses the results to determine if it provides the expected outcome based on a set of acceptance criteria. The test technician then determines if the feature passes or fails based on that criteria and documents the defective feature test condition to communicate and reproduce the observed failure. The developers then modify the code to fix the defect that caused the failure. Functional testing continues in a loop of test, fix, retest until the product meets a standard that is acceptable to the team. At the end of functional testing, the project team decides to either deploy to production or continue adding new features and fixing defects, starting the functional test cycle again.    

It is important to note that functional testing is a form of black box testing in that the test technicians have no knowledge of the internal workings of the system. Thus, the focus of black box testing is on validating the system from the perspective of a user. As such, it helps to ensure that the QA team’s evaluation of a feature is based on how the system behaves under real-life scenarios.

The Value of Functional Testing

Functional testing services help companies gain a competitive edge. Today’s competitive marketplace is flooded with companies offering such similar products that it is easy to get lost in the shuffle. When that happens, companies miss opportunities to land new customers, increase their market share and expand their brand. So, what can companies do to set themselves apart? Many are leveraging technology as their competitive edge and brand differentiator. With that said, a company’s technology must be impressive, intuitive, reliable, and provide such an outstanding experience that customers can’t help coming back for more. More simply stated, the customer experience must be flawless.   

According to a study sponsored by IBM, data breaches cost companies an average of nearly $3.9 million per breach. Often, these breaches stem from minor glitches that could have been avoided with proper testing. Functional testing services performed by the QA team evaluate every area of the software under a variety of scenarios and data inputs. The goal is to have enough test coverage to catch not only the major issues but also the small things that might otherwise be overlooked.

Many companies are bound by compliance and regulatory requirements such as privacy, information storage, and information reporting. When a company’s software doesn’t work correctly, they run the risk of violating their regulatory obligations. The resulting fines and legal action could wreak havoc on a company’s finances not to mention their reputation. Functional testing services help companies validate that their systems are compliant and help them avoid the consequences of not doing so.   

The Basics of a Functional Testing Strategy

There are two basic approaches to a functional testing service strategy. Requirements-focused software testing determines the priority and order of testing efforts based on a prioritization of the user requirements as defined in the functional specification. Thus, the requirements document forms the basis for the acceptance criteria. Business-process-focused testing is a form of role-based testing that determines the validation priority based on how the system is used in the context of use cases. Given that, business-process-focused testing relies on the QA team working alongside the Subject Matter Experts (SMEs) to determine the acceptance criteria.   

Each approach has pros and cons that must be weighed against the context of the nature and complexity of the system. Requirements-focused testing works well for complex systems where there is a well-documented and approved requirements specification. Due to the complexity of the system involved, having a requirements specification as the basis for testing efforts helps ensure the proper coverage to meet those requirements. Business-process-focused evaluation is ideal for systems heavy in process or use cases. Validating process-heavy systems often require a level of domain knowledge that can only be achieved by working alongside the SMEs. 

Functional testing services revolve around a six-step process. The initial step is to identify the functions under test. As mentioned, those functions are determined by either the functional specification or the SMEs. Given the list of features, the team must then define the data input required for each feature. The team must also determine the expected output from those tests. Once input and output have been determined, the QA team executes those tests either manually or in some cases using automated tools. After each test, the test technician compares the results to the expected output and makes a pass/fail determination.   
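
The steps above can be sketched in a few lines of Python: features are exercised with defined inputs, actual output is compared to expected output, and a pass/fail determination is recorded. The `apply_discount` function and its acceptance criteria are invented examples, not from any real project.

```python
# Minimal sketch of the functional testing process described above.
# The feature under test and its acceptance criteria are invented.
def apply_discount(price: float, percent: float) -> float:
    """Feature under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Step 2 and 3: defined inputs and expected outputs per test case
TEST_CASES = [   # (input price, input percent, expected output)
    (100.00, 10, 90.00),
    (19.99, 0, 19.99),
    (50.00, 100, 0.00),
]

def run_functional_tests():
    """Steps 4-6: execute, compare to expected, record pass/fail."""
    results = []
    for price, percent, expected in TEST_CASES:
        actual = apply_discount(price, percent)
        results.append("pass" if actual == expected else "fail")
    return results
```

In practice the same loop runs through a test framework or automation tool, but the structure of input, expected output, and pass/fail determination is the same.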

Types of Functional Testing

Functional testing is a term that encompasses several types of tests, each with a specific purpose.

No single test method is enough to fully test the system. It is important for project teams to use multiple methods throughout the testing cycle. Many of these methods are repeated as often as necessary until the system meets the required standards.    

User Acceptance Testing  

Having users evaluate the system is an important step in validating that it works as expected. In user acceptance testing, end users try the system under real-world conditions. Users often catch issues that surface only because of differences in how they actually use the system. Thus, allowing end users to evaluate the system helps ensure that all realistic scenarios for input and output are covered.

Unit Testing  

Unit testing is a functional testing practice that focuses on evaluating the code itself. This type of validation targets the smallest piece of code that can be isolated and assessed individually, such as a single method or property. By isolating components of the code, unit tests reduce the risk of hidden coupling between them. Good unit tests have no dependencies on outside factors such as a database or file system. They are repeatable, returning the same results under the same conditions with each execution. Lastly, they are self-checking, automatically determining pass/fail status without human interaction.
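A minimal sketch of these properties using Python’s standard `unittest` module; the unit under test (`parse_full_name`) is a hypothetical example:

```python
import unittest

def parse_full_name(full_name):
    """Hypothetical unit under test: split a full name into first/last parts."""
    first, _, last = full_name.strip().partition(" ")
    return {"first": first, "last": last}

class TestParseFullName(unittest.TestCase):
    """Repeatable and free of outside dependencies: no database, no files."""

    def test_splits_first_and_last(self):
        self.assertEqual(parse_full_name("Ada Lovelace"),
                         {"first": "Ada", "last": "Lovelace"})

    def test_handles_single_name(self):
        self.assertEqual(parse_full_name("Plato"),
                         {"first": "Plato", "last": ""})

# Self-checking: the runner determines pass/fail without human interaction.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestParseFullName))
print(result.wasSuccessful())  # → True
```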

Integration Testing  

While unit testing focuses on testing components in isolation, integration testing validates that those components work well when tested in groups.   

Interface Testing  

Software systems often interface with other software systems to provide user functionality. This connection is called an interface. QA teams must validate that the links to other systems operate as expected. An example of an interface is a web service API call to a third-party API to retrieve pricing information. Test technicians can validate this functionality by inputting the required information, performing the function that makes the API call, then validating the results.   
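The pricing example above can be sketched without a live network call by substituting a test double for the third-party API. All names here (`get_price`, `fake_fetch_quote`, the quote fields) are hypothetical, used only to illustrate the input-call-validate pattern:

```python
# Sketch of an interface test: the third-party pricing API is replaced with
# a test double so the contract can be validated without a network call.

def get_price(sku, fetch_quote):
    """Client code: retrieve pricing via an injected API call."""
    quote = fetch_quote(sku)          # stands in for a web service request
    return quote["price"]

def fake_fetch_quote(sku):
    """Test double standing in for the real third-party pricing API."""
    return {"sku": sku, "price": 19.95, "currency": "USD"}

# Input the required information, perform the function that makes the
# API call, then validate the results.
assert get_price("WIDGET-1", fake_fetch_quote) == 19.95
print("interface test passed")
```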

Regression Testing  

Regression testing is a form of software testing that ensures the system still works properly after code changes. Test technicians rerun either a subset of the test suite or the full suite. There are typically two types of regression tests: smoke testing and sanity testing, as discussed below.

Smoke Testing  

Smoke testing gives the QA team a quick measurement of whether the system works well enough to proceed with further validation. It is a non-comprehensive, high-level evaluation that covers the major components of the system. The goal isn’t to determine whether the system works perfectly, but whether the team can proceed with further validation despite any existing defects.

Sanity Testing  

Like smoke testing, sanity testing is also a non-comprehensive and high-level evaluation of the software. However, it is performed after code changes. The goal of sanity testing is to verify if the system continues to work as expected given the changes introduced into the system.  

Functional Testing Services Best Practices

Adhering to a set of standard procedures gives the team a clear roadmap for the test cycle. This roadmap should help the team navigate priority, traceability and communication. Not only that, these functional testing services best practices help avoid some of the common traps that hinder the process.


Prioritize the Functions Under Test

An effective strategy is one that covers as many scenarios as possible. However, deciding where to start can mean the difference between a successful cycle and a frustrating effort that seems to lead nowhere. Before testing begins, it is crucial to prioritize the functions in question. That way, the team can start validation on the items with the highest significance in terms of user importance, risk and cost-benefit. Beginning with the high-priority items allows the team to address them early in the evaluation cycle rather than rushing at the end to “fit it in” before deployment.


Maintain Traceability

Before testing efforts begin, it is important to develop a matrix that maps requirements to test cases. With a traceability matrix, the team can quickly verify that all requirements have been addressed. It is also essential to manage traceability through requirements changes, as these changes could impact the outcome of existing tests. Requirements changes may also call for new test cases. Traceability is ultimately critical to ensuring proper system coverage.
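A traceability matrix need not be elaborate. As a minimal sketch (with hypothetical requirement and test case IDs), it can be a simple mapping from requirements to covering test cases, from which coverage gaps fall out directly:

```python
# Hypothetical traceability matrix: requirement ID → covering test cases.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],          # no coverage yet: this is the gap to close
}

# Flag every requirement with no test case covering it.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # → ['REQ-003']
```

When a requirement changes, the same mapping identifies exactly which test cases must be revisited.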


Establish a Feedback Loop

A formal feedback loop should be established at the onset of testing. The team should document expectations for how new defects are to be reported. Additionally, the team should document how developers will communicate releases to the test technicians for re-validation. Lastly, there must be a formal process for closing defects that ensures all parties agree on the resolution of each item.

Validation of Test Cases 

As the size and complexity of the system increase, so does the risk of redundant scenarios. Given that, the project team should have a clear process for evaluating each new case to ensure it is not a duplicate. System complexity also increases the risk of logic errors. The test technician should work closely with the requirements analysts and the SMEs throughout the process so that logic errors can be spotted by those with enough domain knowledge to do so.

It is important for project teams to employ a variety of functional testing services to ensure proper coverage. By doing so, they minimize the risk of bugs that cost the company customers and money and damage its reputation.

QualityLogic’s team of experts has over 30 years of experience providing functional testing services across a variety of development methodologies. We use industry-proven tools and techniques to give each feature the level of examination required to ensure a positive user experience across the system. Click here to learn how QualityLogic’s functional testing services can help detect defects early so you can deliver a product that far exceeds your customers’ expectations.

Website QA Testing: Effective Tests and What They’re Made Of

Website QA testing is no longer an optional task delegated to the end of the project life cycle. The amount of time and effort spent on site validation impacts the user experience, which ultimately affects the company’s bottom line. That is why testing early and often is so important. The most effective QA strategy incorporates validation points at regular intervals throughout the project and validates every layer of the site to deliver the best possible user experience. Let’s explore what makes for effective QA test cases and how to develop a strategy for your project.

What is Website QA Testing?

Website QA testing is a necessity if a company is to remain competitive in a crowded industry. In effect, website testing is the process of validating a website’s functionality, performance and user experience. The goal is to identify and fix bugs before they reach the customer. Manual QA analysts have historically performed website validation. However, the recent shift to Agile has turned QA from a siloed effort into one that involves developers, business analysts, QA and just about every other member of the project team.

The old saying “first impressions are everything” holds true for websites. Visitors base their first impression on how the site looks and how intuitively they can accomplish their goal. In today’s competitive landscape, companies have mere seconds to grab a visitor’s attention and give them a reason to stay. If a customer does move beyond the first page, their experience on the rest of the site must be exceptional. They won’t tolerate the little “quirks,” bugs or slow page loads that prevent them from getting what they need; anything that slows them down drives them away. Thus, the most effective QA test strategy roots out and resolves issues before the visitor even hits the landing page.

At the highest level of website QA testing, functional tests verify that the site works as expected. The focus with this form of validation is to determine if the visitor can perform all expected tasks without errors. As such, this level of validation focuses heavily on the user experience. If a customer can’t find what they need quickly, they aren’t likely to stick around for very long. 

The next important factor to consider is performance. The site must accommodate the expected number of visitors without sacrificing page response times. Performance assessments help ensure that the site operates smoothly regardless of the number of simultaneous users.  

The Value of Website Testing

Many companies don’t realize how website QA testing can affect their bottom line, but one bad review from a poor user experience can wreak havoc on a company’s reputation. The best way to avoid this is to ensure the site performs as expected and delivers what it promises. QA testing can help root out these problems before the customer ever sees them. 

Website QA Testing Catches Errors

Fixing a bug after deployment is costly, and a serious issue can end up costing the company millions. Imagine what would happen if a seemingly small bug exposed customer data. The financial repercussions could be devastating as the company battles lawsuits and regulatory fines for such a mistake.

Post-deployment bugs also take developers away from their other duties. The more time they spend on bug fixes, the less time they have available to devote to developing new features. Depending on the quantity and severity of the issues, developers could get so bogged down in code fixes that other items on the project plan take a back seat. The same can be said for test technicians. When QA team members are forced to retest the same issues, they have less time available for new features or validating other items in the test suite. All the while, the user experience suffers. Not only that, any competitive edge the company has gained starts to suffer as the entire team works to fix existing code rather than keeping up with customer demand.  

Types of Website QA Testing

When developing a website QA testing strategy, it is important to consider all aspects of the site. From the front-end to the back-end, each layer is a piece of the puzzle that works together to make for a positive user experience. Testing at each layer of the application helps spot weaknesses that could negatively affect other areas of the site. 

Functional Tests

Functional tests verify that the functions of the site work as expected. With this form of validation, the goal is to simulate various scenarios by inputting data and verifying the results provide the expected outcome. Effective QA tests also include some form of user experience testing. It is often helpful to have a QA team member who is skilled in UX to provide input on the user experience.  

Regression Tests

Any change has the potential to affect every area of the site; even the smallest, most innocuous change could bring down an entire website. Regression testing helps ensure that the site still functions after every update and that code fixes haven’t introduced new problems into the system.

Accessibility Testing

Website QA testing must include accessibility testing to make sure that those with disabilities can use the site. This level of testing includes validating that screen readers can process the site, that users can navigate the site with just a keyboard, and that all pictures and videos have proper descriptions.

Integration Testing

An effective QA test strategy acknowledges that websites do not exist in a vacuum. There are often components outside of the application that provide additional services or features. A common type of integration point is an API call to a third-party service. Testing these components helps ensure that any service provided by that integration works as expected.

Performance and Load Testing

Great content serves little purpose if visitors can’t access it due to slow page loads. Load testing examines the system’s ability to service user activity for large numbers of users up to, and exceeding, usage expectations. Performance testing aims to optimize the operation of the different parts of the system so that, when they are all linked together, they offer the best user experience possible.
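At its core, a load test drives many simultaneous requests and records each response time. A minimal sketch, in which `handle_request` is a stub standing in for a real page request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stub page request: real load tests would issue an HTTP request here."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated page work
    return time.perf_counter() - start    # response time in seconds

# Drive 100 requests through 20 concurrent workers and collect latencies.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(100)))

print(f"requests={len(latencies)} worst={max(latencies):.3f}s")
```

Real tools add ramp-up schedules, think time, and percentile reporting, but the measure-under-concurrency idea is the same.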

Security Tests

The purpose of security assessments is to ensure the site is protected against unauthorized access. This type of validation prevents exposure to fines and compliance risks associated with data breaches. 

Compatibility Testing

Compatibility testing ensures the site performs as expected on all major web browsers. Additionally, this form of testing validates the site’s behavior on a variety of operating systems as well as various network and hardware configurations.

Companies can no longer assume that visitors access the site from a desktop. The rise of mobile technology means that companies must develop a website QA testing strategy that ensures customers can get the information they need on the go. Compatibility assessments help ensure the site is mobile-friendly and works properly on a variety of devices.

The Makings of an Effective Website QA Test

The most effective website QA testing strategy recognizes that it is the breadth of the test suite that makes for a successful QA effort. An effective test is one that is part of an overall strategy covering the entire stack of the site, from the back-end to the UI. Each case plays an important role in validating the functionality and performance of the site.

Whether it is a back-end case or a functional case, each test must cover just one specific scenario and have only one clearly defined expected outcome. Next, each case should have a strong title and an in-depth description; a good title and description help prevent duplication in the suite. Each case must also have properly defined assumptions and preconditions. Lastly, the only way to get a real sense of how the site works is to use real data. “Dummy” data suffices for validating the application in lower-level environments, but there is a tendency for reviewers to “massage” it to fit the scenario. Using real data ensures reviewers exercise the site the way a real visitor would.
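One way to capture those attributes is a simple record per case: one scenario, one clearly defined outcome, a strong title and description, and explicit preconditions. The field names and the checkout example are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class SiteTestCase:
    title: str                 # strong, descriptive title
    description: str           # in-depth description to avoid duplicates
    preconditions: list        # explicit assumptions and setup
    steps: list                # the one scenario under test
    expected_outcome: str      # exactly one clearly defined outcome

checkout_case = SiteTestCase(
    title="Guest checkout with a saved cart",
    description="A guest user completes checkout for a cart created "
                "in a prior session.",
    preconditions=["Cart contains 2 items", "User is not logged in"],
    steps=["Open cart", "Choose guest checkout", "Pay with test card"],
    expected_outcome="Order confirmation page shows an order number",
)
print(checkout_case.title)
```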

Automating Website QA Testing Efforts

While there is an upfront investment in writing the scripts, the payback comes in the efficiency gained during testing. Automated testing frees the QA team to perform exploratory testing and take other validation measures that increase test coverage.

Why Website QA Needs to Be an Ongoing Effort

Customer demand must drive any successful website QA testing strategy. The website must be in a continual cycle of build, test and release if it is to remain relevant to the customer. Not only that, regular testing makes for a healthier application, which is ultimately easier to maintain. It is also easier to scale a fully functioning, error-free website: when the application works as expected, there is less chance of a lingering bug hindering performance as the site grows.

Website QA testing is instrumental in helping companies remain relevant and valuable to customers. An error-laden website is the fastest way to send customers on the hunt for a company that can deliver a better experience.  

QualityLogic’s website QA testing processes help accelerate your testing efforts so that your team can focus on developing a website that makes a great impression on the first visit and every visit thereafter. Click here to learn how to leverage our software testing services in your website QA efforts.  

Why Test Automation Initiatives Fail: Advice from a 30-year-old Software Testing Company

Anyone who has been in the software testing field for more than a few years has seen a test automation effort fail. The effort starts out with great optimism that the selected automation tool will work wonders and that, in short order, all the tests will run with the push of a button. However, the tools don’t quite work as advertised, the automation is trickier than expected, and everything takes longer than estimated.

A (Not So) Hypothetical Test Automation Failure

Once complete, the automated tests are rolled into the release process. They soon start generating false failures due to changes in the GUI. Some test automation team members get diverted to help fight these false failures, slowing progress further. Next, a few team members quit, and maintaining their code becomes quite difficult. Management starts to get disillusioned at the slow progress. Developers get tired of the false failures and want additional proof before they will fix bugs caught by automation. The whole thing spirals downward from here. At some point, perhaps because of a budget crunch or the loss of an automation champion in management, the plug is pulled on the automation effort.

Although this sad scenario is all too common, it doesn’t have to happen to you. QualityLogic has 30 years of experience helping customers successfully test their products, and we would like to share some of our test automation wisdom.

Why Companies Pursue Test Automation

There are many good reasons why companies want to start automated testing. There are potentially large cost savings over manual testing. Release cycles are getting shorter, so testing must be accelerated. Perhaps the most compelling reason is that test automation improves product quality by catching bugs earlier in the process. A set of manual tests may take a week to run; the same tests, automated, could be run once or twice a day.

What Tests are Typically Automated?

There are two primary types of tests that are typically automated: unit tests, which focus on individual source code methods, and functional tests, which ensure all aspects of the software program work correctly. Unit test automation is usually the domain of the developer, while functional test automation typically rests with the software test team.


Types of Test Automation Tools and Their Effectiveness

There are a wide variety of test automation tools with varying degrees of effectiveness, requiring differing skills. Automation tools can be roughly categorized into the following general areas:

  • Dumb Record & Playback Test Tools – A brute force recording of user interactions with the application. Test scripts are very fragile and break with the slightest change in the application.
  • Smart Record & Playback Test Tools – More adaptive recording of user interactions with the application, storing multiple object identifiers and leveraging machine learning. Able to adapt in a limited fashion to application changes without breaking the test script.
  • AI Assisted Auto Discovery & Playback – Self-discovery of paths through the application using reinforcement learning, with the ability to playback any of the discovered paths. Able to adapt in a limited fashion to application changes without breaking the test script.
  • Abstract Syntax Test Tools – Use of natural language, keywords, or procedural text (think Cucumber/Gherkin) to define test cases, with the underlying automation code driven by the abstract test definitions. In some tools the automation code triggered by the abstract test definitions must be hand coded and in other tools some helper routines deal with more common scenarios that can be inferred from the application objects.
  • Hand Coded Test Development – Use of common programming languages to define automated test cases using an underlying automation API such as those supported by Selenium, Appium, or mobile device operating systems.

While there is a lot of excitement around AI-enabled testing tools, particularly those that can auto-generate test scripts, these tools work best with applications whose logic is relatively simple. In our experience, applications with more complex logic require hand-coded automated test cases to fully exercise that logic.

14 Test Automation Best Practice Guidelines from the Pros

The best practices recommendations below focus on the development of hand-coded automated functional tests used for new feature validation and regression testing. Functional test automation of applications containing complex application logic is typical of most automation projects that QualityLogic has done over the years.

Ensure Management Commitment

Test automation can generate a huge ROI, but it takes time. Management must be committed to the effort, and their expectations must be carefully calibrated. Automation is not a closed-ended project, but rather a fundamental change in how the testing aspect of software development is done.

Staff Skills Needed for Test Automation

Programming skills are needed for most test automation efforts. Senior test developers can code reference tests for various test classes; more junior staff can then use those reference tests as a guide for derivative tests. Most organizations select a specific language for test case development, and it is prudent to have potential test developers demonstrate their skills in the selected language before being added to the team.

Industry-wide, Java and JavaScript are the most popular languages for test automation projects; however, QualityLogic’s customers have more frequently been using C# or Python. In theory, Python is a friendlier language for more junior developers, which may be a consideration when selecting a language.

Staff Experience   

Having staff on the test automation team who have been there and done that is huge. Many companies leverage outside software testing companies, like QualityLogic, to provide their internal teams with the necessary test automation expertise and experience.

Domain Knowledge for Test Automation Efforts

Test developers are more effective when they have hands-on knowledge of the product they are automating and a sound understanding of the domain in which the application is used. This knowledge is typically picked up on the fly during test automation development. Without this familiarity, test developers may do what you ask but not what you want, as they won’t “see” problems that are obvious to anyone familiar with the application or domain.

Staffing Your Test Automation Effort

The automation effort must be adequately staffed both for the active test automation development phase and for ongoing test maintenance. There should be enough resource redundancy such that the program doesn’t fail if a test developer leaves the team.

Deciding What Test Cases to Automate

Priority for test automation should go to tests you want to run every build, tests that need to be run across multiple platforms, and tests that are time-consuming to run manually. A good place to start for many organizations is automating the build release smoke test. Other criteria include how easy the tests are to automate (to get a quick success), avoiding test cases with unpredictable results, and automating the most frequently used functionality.
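These criteria lend themselves to a lightweight scoring exercise. The sketch below ranks hypothetical automation candidates; the candidate names, scores (1-5), and weights are all made-up illustrations, not a prescribed formula:

```python
# Hypothetical candidates scored 1-5 against the criteria above.
candidates = {
    "smoke test":      {"run_every_build": 5, "cross_platform": 4, "manual_cost": 5},
    "report export":   {"run_every_build": 2, "cross_platform": 2, "manual_cost": 4},
    "rare admin flow": {"run_every_build": 1, "cross_platform": 1, "manual_cost": 2},
}

def priority(scores):
    # Weight "run every build" highest, then cross-platform, then manual cost.
    return (scores["run_every_build"] * 3
            + scores["cross_platform"] * 2
            + scores["manual_cost"])

ranked = sorted(candidates, key=lambda name: priority(candidates[name]),
                reverse=True)
print(ranked)  # → ['smoke test', 'report export', 'rare admin flow']
```

Unsurprisingly, the smoke test ranks first, matching the advice to start there.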

Test Automation Tool Selection

The first-order selection criterion is whether to go with a commercial tool or open source tools. Commercial test automation tools from big players like Tricentis are robust, but very expensive. Commercial tools tend to be relatively easy to use, simplify the test creation process, offer training and support, and tend to be less buggy than their open source counterparts.

Open source automation tools are free, have supportive user communities, and some, like Selenium, have become de facto standard test tools. Multiple open source tools may need to be used in concert for a given automation solution, and integration can be challenging. Most of QualityLogic’s test automation engagements use open source tools. Customer motivations vary, but in general they do not like the idea of being locked into a single vendor for a critical part of their development infrastructure.

Test Design

It is important to start with a good manual test case, then automate it where possible. Clear guidance on test intent, preconditions, user actions, and expected results is critical. Using an abstract syntax like Gherkin can help clarify test intent but adds another layer of abstraction to the test execution process.

Development Priorities

Showing automation results quickly is a huge confidence builder for both the automation team and management. At QualityLogic we use AI-based similarity analysis of manual test cases to identify opportunities to build libraries for common user interactions and to identify the order in which to approach coding of test cases to maximize development progress. The techniques can be grouped into the following approaches:

  • Isolation of globally common code test sequences whose functionality can be automated as part of a common code library.
  • Clustering similar test cases for assignment to the same programmer.
  • Predictive ordering of test cases for development to maximize code sharing between similar test cases

Coding Best Practices for Test Automation

Organizations should develop coding best practices for test automation including templates that guide the test developers’ efforts. Key elements of an effective set of best practices include:

  • Test case naming
  • Object location strategies (use more than one)
  • Hard or soft asserts
  • Wait handling
  • Page object pattern usage
  • Data driven test inputs
  • Minimal dependencies with other test cases
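Several of these practices, notably the page object pattern and descriptive test naming, can be sketched together. `FakeDriver` below is a stand-in for illustration; a real implementation would wrap a Selenium or Appium driver:

```python
# Sketch of the page object pattern: the test expresses user intent, while
# the page object is the one place that knows the page's locators.

class FakeDriver:
    """Stub driver standing in for a real Selenium/Appium driver."""
    def __init__(self):
        self.fields = {}
        self.current_page = "login"

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        # Pretend a successful login redirects to the dashboard.
        if self.fields.get("password"):
            self.current_page = "dashboard"

class LoginPage:
    """Page object: locators live here, not in the test cases."""
    USERNAME, PASSWORD, SUBMIT = "username", "password", "submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).log_in("qa_user", "s3cret")
print(driver.current_page)  # → dashboard
```

If the login page’s locators change, only `LoginPage` needs updating, not every test that logs in.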

Automated Test Scope

Automated tests should focus on a clear and narrow objective, typically replicating a specific user action. Tests with a narrower scope are easier to code, easier to debug, and easier to maintain.

Source Code Control

Test automation is software development. Robust source code management using platforms like GitHub is a must.

Dealing with False Failures

Living with false failures is a fatal mistake for automation efforts. If a test is flaky, pull it out of the daily automation runs until it is fixed. If development is not going to fix certain bugs, pull the test cases that trigger those errors from the daily runs. You must preserve an environment where an automation run failure is a red flag to everyone on the team; false failures, or failures that are ignored, poison that assumption.
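One way to honor this rule is to quarantine a flaky test so the daily run stays trustworthy while keeping a visible record of why it was pulled. The tests and the ticket number below are hypothetical examples:

```python
import unittest

class CheckoutTests(unittest.TestCase):
    def test_add_to_cart(self):
        self.assertTrue(True)  # stable test stays in the daily run

    @unittest.skip("flaky: intermittent timeout, quarantined under QA-1234")
    def test_apply_coupon(self):
        self.fail("would produce false failures in daily runs")

# The run stays green, and the skip count keeps the quarantine visible.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests))
print(result.wasSuccessful(), len(result.skipped))  # → True 1
```

A failure in this suite is now a genuine red flag, while the skip reason points anyone reading the report to the tracking ticket.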

Test Automation Framework

A test automation framework should be put in place. This is more than just the selected automation tool. It includes common resources and libraries that test cases leverage, integration with build processes, issue trackers, and other parts of the software development infrastructure. This allows the test developer to focus on coding test cases knowing the framework will take care of the execution and reporting details.

Partnering with QualityLogic for Your Test Automation Effort

Partnering with a software testing company like QualityLogic, with decades of experience automating software testing, can help ensure a successful outcome for your automation efforts. Whether we take on the whole automation effort or integrate into your existing test development teams, we are confident that our skilled and experienced test automation engineers can have a dramatic impact on your automation efforts.

IEEE 2030.5 Takes Off: The Latest News on the IEEE 2030.5 Standard

It is wonderful to look back at the blog we wrote in November of 2017 and see the amazing progress the IEEE 2030.5 community has made in the past 2+ years. Since then:

  • We’ve gone from waiting for CA Rule 21 to have a “required by” date to having such a date (June 22, 2020) and driving the vendor community to invest in IEEE 2030.5 development and certification.
  • We’ve gone from a concept of a test and certification program for IEEE 2030.5 for DER to having a complete program organized and managed by the SunSpec Alliance. This is a huge leap forward.
  • We’ve gone from beating the bushes for companies and researchers working on IEEE 2030.5 to working with the 8 SunSpec approved test labs and dozens of vendors implementing IEEE 2030.5.
  • There is now an open source implementation of an IEEE 2030.5 Client available free on EPRI’s Github site.
  • We’ve seen utilities outside of CA in the US, Canada and Australia requiring IEEE 2030.5 for both DR and DER applications.

Two years ago, QualityLogic organized the “2nd IEEE 2030.5 Symposium and Expo,” a public workshop in Southern CA, with about 80 attendees over the two-day event. SunSpec has since conducted at least 4 public executive workshops in the past year and created an online education course through UCSD. QualityLogic is a sponsor of both activities and participates in them.

This blog aims to dive into the progress of the IEEE 2030.5 standard on the following topics:

  1. The complexity of implementing IEEE 2030.5 and how QualityLogic is helping to simplify the process
  2. How a strong alliance of utilities and vendors are building an IEEE 2030.5 eco-system
  3. The importance of IEEE 1547.1 in managing DER assets
  4. IEEE 1547.1 certification and the “end-to-end” testing challenge
  5. What is next for IEEE 2030.5

Implementing IEEE 2030.5 is Not Simple

IEEE 2030.5 is a very rich, modern IoT protocol. It has over 30 distinct “function sets” covering everything from device discovery to security to smart grid functions like DER management, demand response and flow reservation for EVs.

Even implementing an application as narrowly defined as the Common Smart Inverter Profile (CSIP) is more complex than you would imagine. Surely all that is needed is to implement the DER-specific function set in IEEE 2030.5. Correct?

Not so. The structure of IEEE 2030.5 requires implementing almost 20 of the 30+ function sets to be compliant with the CSIP requirements. These include everything from device discovery to device capability exchanges to both polling and pub/sub communications, and much more. This is the reason QualityLogic has been so busy training both development teams and test labs. We’ve even given our two-day workshop to utilities wanting to better understand what they should be asking vendors for.
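To give a flavor of the discovery step: an IEEE 2030.5 client GETs the server’s DeviceCapability resource and follows its links to the other function sets. The XML fragment below is a simplified, hand-written illustration (the element names follow IEEE 2030.5 schema conventions, but this is not a captured server response):

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative DeviceCapability response: each *Link element
# advertises where another function set lives on the server.
SAMPLE_DCAP = """<DeviceCapability xmlns="urn:ieee:std:2030.5:ns" href="/dcap">
  <TimeLink href="/tm"/>
  <EndDeviceListLink href="/edev" all="1"/>
  <DERProgramListLink href="/derp" all="1"/>
</DeviceCapability>"""

root = ET.fromstring(SAMPLE_DCAP)

# Collect the advertised function-set links, stripping the XML namespace.
links = {child.tag.split("}")[1]: child.get("href") for child in root}
print(links)
# → {'TimeLink': '/tm', 'EndDeviceListLink': '/edev', 'DERProgramListLink': '/derp'}
```

A real client would then fetch each linked resource over TLS, which is part of why even a “DER-only” profile pulls in so many function sets.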

Building a Standards Eco-System is Not Easy

The progress made with the IEEE 2030.5 standard is rather amazing, especially given the lack of a well-organized and funded alliance for the standard. Where is the equivalent of the Wi-Fi or Zigbee or OpenADR or the MultiSpeak Alliance for IEEE 2030.5? There isn’t one and that is a long-term issue.

Fortunately, the SunSpec Alliance took on the task of creating and managing a certification program for the application of IEEE 2030.5 to DER management. And the IEEE 2030.5 working group continues to evolve the standard: having just finished the 2018 version, it recently started work on the next revision. A group of vendors led by Hydro Ottawa is engaged in using IEEE 2030.5 for the Great-DR project. And two utilities in Australia are working on implementing their own versions of IEEE 2030.5 DER management.

We tried in 2016 to create an alliance around IEEE 2030.5, but at that time there was not quite enough interest. Now there is, and a group convened by the DOE and IEEE is assessing what is most needed for the future of the standard. The answer I see emerging is a strong alliance of utilities and vendors with investment in the IEEE 2030.5 standard.

How that will evolve is TBD. But if you are interested in making it happen, please contact us so we can get you involved when we have a clear direction.


What is IEEE 1547.1 and Who Cares?

IEEE 1547 is the IEEE industry standard for managing the interconnection of any generating resource to a distribution grid. The standard was updated in 2018 to incorporate smart inverter functions. Prior to this, the standard was very simple and basically required the inverter for the generation source to disconnect from the grid under any anomalous circumstances. If the local grid suffered a voltage sag or high-voltage “blip,” the inverter was required to stop putting energy into the grid. The goal was to ensure that distributed energy resources (DER) did not damage the grid under stress conditions.

The IEEE 1547-2018 version ensures that DER resources can be used to manage grid reliability. For instance, instead of disconnecting from the grid when the voltage sags, smart inverters could be asked to provide voltage support to the local grid.  

IEEE 1547.1 is the companion standard to IEEE 1547 that specifies how to conduct the tests that certify IEEE 1547 compliance. IEEE 1547.1 is primarily a functionality test for advanced smart inverter functions. But IEEE 1547-2018 also adds, for the first time, a required communications capability using one of three designated protocols:

  • SunSpec Modbus
  • DNP3
  • IEEE 2030.5

And IEEE 1547.1 specifies how to test for the communications capabilities. This is a huge step forward in that for the first time a smart inverter hardware certification program will validate that sending instructions or information in an industry standard protocol will achieve the desired behavior of the system. The significance of this for managing DER assets cannot be over-emphasized. It is a great starting point for gaining confidence in the interoperability AND performance of a smart inverter in a single certification program.

The End-End Validation Challenge and IEEE 1547

One of the major issues in CA Rule 21 today is how to ensure that instructions sent from a utility DERMS to an aggregator or building EMS are turned into the intended behaviors of the targeted smart inverters. Some stakeholders seem to believe that full end-to-end testing will be conducted with IEEE 1547.1.    

Unfortunately, IEEE 1547.1 certification testing will not accomplish the objective of a full end-to-end test. IEEE 1547.1 actually does the following for interoperability:

  • It ensures that one of three standard protocols (DNP3, SunSpec Modbus, or IEEE 2030.5) can be used to communicate the inverter function settings defined in IEEE 1547-2018 to a local inverter control system or gateway device.
  • It validates that the communicated functional instructions are correctly implemented in the smart inverter.

The term “gateway” in IEEE 1547-2018 means a local communications capability, one in physical proximity to the actual inverter system (rather than cloud-based). The purpose of defining the gateway as a local interface is to reduce the risk of a communications failure due to the loss of the cloud or an internet connection.

The hope that IEEE 1547.1 will address the end-end testing issue is misplaced.  What IEEE 1547.1 will not do is the following:

  • IEEE 1547.1 will not require that one of the standard protocols is used in the installation and operation of an inverter.  While IEEE 1547.1 ensures a “capability” to use one of the protocols, the requirement to actually use it will be a vendor, utility or policy specific decision.
  • It does not ensure that the local interface for IEEE 1547.1 testing will be IEEE 2030.5. That will be up to the inverter vendor. If they already have a SunSpec or DNP3 local interface, that may well be the protocol used for IEEE 1547.1 certification.
  • There is no “end-end” testing in IEEE 1547.1. The certification only validates that a correct message in one of the protocols from a simulated aggregator, utility, cloud-based adapter, EMS, etc., will result in the desired performance. There is no testing with a specific EMS, aggregator system, utility DERMS, or any other source that may be sending real instructions to the inverter.
  • The IEEE 1547.1 Interoperability test is not a protocol test. While it ensures that the IEEE 1547 functions can be managed via a specific protocol (including monitoring and scheduling), it does not validate that the rest of the protocol is functioning correctly.  That is what is done in a protocol test such as the SunSpec IEEE 2030.5 CSIP test.  This means that an inverter can pass a 1547.1 interoperability test but still not communicate correctly with a production server for that protocol.
  • IEEE 1547.1 does not include any security testing to validate certificates, authentication and security features and controls.  This is a critical aspect of an end-end test in our view.

The bottom line is that the IEEE 1547.1 certification will not solve the “end-end” testing challenge. 

What’s Next for IEEE 2030.5?

Rapid progress is being made implementing IEEE 2030.5 in California, and other utilities are developing similar programs. DERMS, aggregator, building EMS, microgrid control systems, gateways, and inverter controllers are all incorporating IEEE 2030.5, and a robust DER protocol test and certification program is finally in place and operating. There are eight authorized test labs, a security infrastructure, commercial and open-source code bases, and a set of test and certification tools to assist the development of IEEE 2030.5 systems.

As CA Rule 21 comes into effect and the industry observes that an IEEE 2030.5 communications infrastructure can fulfill its promise, other utilities and jurisdictions will follow.

What is needed most now is a robust industry alliance aimed at educating and implementing the IEEE 2030.5 technology globally.


4 Test Automation Misconceptions Explained

Test automation is a software testing technique in which test scripts or automation tools execute tests automatically, without a human stepping through each case. It has become a cornerstone of fast-paced development cycles and helps drive a company’s competitive edge. Using test automation during the software development life cycle provides quick feedback to the team after a new feature has been developed. It is especially beneficial for tests that don’t change frequently and are executed repeatedly on every new release.

Despite the obvious benefits, many companies are still on the fence about adopting this approach. Their trepidation often stems from misconceptions bred from horror stories of failed attempts at automation and of massive investments in technology that didn’t provide a positive ROI. We want to address a few of those misconceptions to help you make a decision that is right for you.

Test Automation will Provide You with More Free Time


Test automation isn’t so much about creating more free time, but instead being more efficient and productive with the time spent testing. By automating repetitive tests, you can reallocate that time to running exploratory tests to verify that your application is still in good order as changes are being delivered. The downside of automating those repetitive tests is that many organizations fall into a “set-it-and-forget-it” mentality. Automated test script maintenance is one of the most crucial aspects of test automation. As the development of your product changes, so must your automation code.

The triaging and maintenance of automated tests is where most organizations tend to trip up. They don’t build that time into their initial test automation strategy, and the automation project dies as a result. This is where a third-party testing service can be of great use. QualityLogic is regularly utilized to review test failures as well as debug and update test automation code. We are also extremely adept at helping companies develop their test automation strategy and writing automated test scripts. Utilizing an outsourced software testing company to ensure the success of your automation project allows your company to focus on developing revenue-generating code for new features and critical updates.

While test automation requires an initial upfront investment and time to set things up, the payoff comes when your team is free to focus on more important areas that help you maintain your competitive edge.

The Cost of Automated Testing is Too High

The initial sticker shock of test automation can dim any prospect of an actual ROI. The reality, however, is that failing to perform adequate testing costs more than the initial investment. A study by the National Institute of Standards and Technology (NIST) found that software defects cost the economy nearly $60 billion annually. Think you’re exempt? Think again. Just one post-deployment issue could leave an entire function of your system out of commission, costing you money and customers. With that said, test automation is about much more than speed to market. It’s about catching bugs pre-deployment, when they are less expensive to fix.

Aside from the cost benefits, there are several intangible benefits that more than justify the investment. Automating repetitive tests frees up time that allows a team to reflect more deeply on the needs and wants of the customer. The World Quality Report found that ensuring customer satisfaction is the primary expected outcome for software testing teams.

Satisfied customers form the foundation of any successful business as customer satisfaction leads to repeat purchases, brand loyalty, and positive word of mouth. A recent analysis of customer satisfaction concluded:

  • A totally satisfied customer contributes 2.6 times as much revenue to a company as a somewhat satisfied customer.
  • A totally satisfied customer contributes 17 times as much revenue as a somewhat dissatisfied customer.
  • A totally dissatisfied customer decreases revenue at a rate equal to 1.8 times what a totally satisfied customer contributes to a business.

Finding the right blend of manual and automated testing can ensure that your software meets the demands of your customers and allows your team to focus on bettering the product through new features and upgrades. 

Automated Testing is Better Than Manual Testing


The tendency to compare manual testing to test automation is a mistake that could leave you facing the same problem automation was supposed to avoid. The thing is, automation and manual testing go hand-in-hand, and neither is better than the other. Both strategies complement each other in ways that allow for the most comprehensive and effective test strategy. That said, you will need to employ both at some point in your development cycles. The key is knowing when and how to implement each.   

Situations Best Suited to Test Automation

As mentioned previously, continuous integration environments require a testing approach that can keep pace. Test automation makes it possible to repeat an entire test suite as often as needed. Not only that, automated scripts can simulate multiple concurrent users to evaluate the system’s performance.    
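The concurrent-user idea above can be sketched in a few lines. This is a generic toy illustration, not QualityLogic tooling or a real load-testing framework: a thread pool stands in for simultaneous users, and each worker times a stubbed request.

```python
# Toy sketch of simulating concurrent "users": each worker times a
# stand-in for a real HTTP call. Illustrative only.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Stand-in for a real service call; returns (user_id, latency_seconds)."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend network/processing delay
    return user_id, time.perf_counter() - start

def simulate_users(n_users):
    """Run n_users 'requests' concurrently and collect their latencies."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(fake_request, range(n_users)))

results = simulate_users(20)
```

A real load test would replace `fake_request` with actual traffic and aggregate the latencies into percentiles, but the shape of the script is the same.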

Data-driven functions are another prime area for automation. There may be cases where the same tasks need to be validated with different inputs. Doing so manually is time-consuming and a waste of QA resources. Automating these types of tasks ensures that all possible input scenarios are accurately covered in depth. 
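The data-driven pattern can be sketched as one function under test, a table of input/expected pairs, and a loop that checks each case. The `apply_discount` function here is hypothetical, purely for illustration:

```python
# Hypothetical function under test, used to illustrate data-driven testing.
def apply_discount(price, pct):
    """Return price reduced by pct percent, rounded to cents."""
    if not (0 <= pct <= 100):
        raise ValueError("pct out of range")
    return round(price * (100 - pct) / 100, 2)

# The data table: (price, pct, expected). Adding coverage means adding rows,
# not writing new test code.
CASES = [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (80.0, 50, 40.0),
    (0.0, 100, 0.0),
]

def run_cases():
    """Run every row of the table against the function under test."""
    return all(apply_discount(price, pct) == expected
               for price, pct, expected in CASES)
```

Frameworks such as pytest formalize this same pattern (e.g. with parametrized tests), but the core idea is simply separating the test logic from the test data.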

Lastly, static and repetitive tasks are ideal for automated scripts. Why waste time having someone manually verify things that remain relatively unchanged from one cycle to the next? Letting automated test scripts handle these tasks frees your team to work on more important items.  

Instances When Manual Testing Makes More Sense

There are scenarios where human observation is more appropriate than an automated script. For example, when the goal is testing usability,  a manual approach is best. Humans learn more about user perspective during their evaluation. They can then use this knowledge to make recommendations on how to improve the user experience.  Also, they are good at catching things the system missed. It is not uncommon for someone to find things that were never addressed as a part of the automated scripts. Lastly, and perhaps most importantly, there is no substitute for the analytical observation skills required to evaluate complex systems. It may not be possible to evaluate certain features using automation. In those cases, a manual approach is necessary.  

In the end, the real question isn’t which is better than the other. The real question is: how can you capitalize on both to get the most effective results?

Automated Testing Hinders Human Interaction

Test automation doesn’t eliminate the need for human interaction. If anything, it requires more collaboration if the scripts are to be comprehensive and effective. Writing scripts can’t happen in a bubble. The most effective scripts result from a collaboration between the business analysts, developers and QA team. This sharing of ideas is what ultimately leads to a comprehensive suite of scripts that reflect the expanse of system functionality. 

It’s also important to remember that automated testing isn’t some magical set-it-and-forget-it tool that runs perfectly every single time. Scripts will need to be revised as requirements change. New cases will need to be developed as new features arise. All of this requires a feedback loop between every member of the team to keep everything in sync.  

We realize that not all projects are created equal. Our team will help you identify aspects of your system that lend well to automation and which do not. Based on that, we will devise a strategy and determine a framework that best suits your needs. With this approach, you won’t waste time and money on a ton of bulky features that you’ll never use.  

Our unique approach to test automation uses the latest in virtualization and cloud computing. As a result, we can scale our testing environment based on your needs with minimal impact to your project budget. Click here to learn more about how our test automation services and processes can help you release the highest quality software possible.  


7 Reasons You Should Choose QualityLogic’s Software Testing Services over the Competition

Choosing a testing services company is one of the most vital decisions you make when you’re focused on developing first-rate software systems. It’s a decision that impacts many facets of your software development cycle, brand equity, and bottom line. 

Software testing is critical; millions of dollars have been lost due to software malfunctions. Remember the Equifax social security hack? Hackers exploited a known weakness in a web framework. A patch had been available for about two months before hackers entered Equifax’s network. That hack resulted in 143 million stolen consumer records. Situations like this can be avoided by entrusting the testing of such systems to industry leaders, like QualityLogic.

In the spirit of the field of engineering, let’s be scientific, as we explore the reasons you should choose our company over the competition. First, we’ll explore the key criteria for choosing a software testing services company, and then, we’ll look at how our company measures up. 

So, let’s begin the “test.” Shall we? 

7 Factors to Consider in Choosing a Software Testing Services Company 

They include:

  1. Capacity and reputation 
  2. Service-level agreements
  3. Security and protection of intellectual property
  4. Communication, location, language, and time zone
  5. Engagement models
  6. Customization
  7. Cost savings

1. Capacity and Reputation

One could argue that capacity and reputation are the most fundamental criteria. Does the company you’re considering have the knowledge, skills, tools and experience to prove they can deliver? Can they truly walk their talk?  

Our company was founded in 1986. That’s over 32 years of testing experience! Over this period, we’ve worked with industry leaders and awesome start-ups and have built a rare knowledge base that we leverage in all our engagements. We have also developed some of the best testing tools in the industry. Our tools “are the de facto and official standards for verifying interoperability and conformance and formal certification testing in many industries.”

Our core services include mobile & web application testing, test automation, API testing, and big data testing. Over the last three decades, we have perfected our process, so you can be sure you’re being offered the best testing services in the industry. Lastly, we have strategic alliances with some of the best brands in the IT and testing industry; this gives us the resource depth to confidently work on the most demanding projects.

2. Service Level Agreement

The best brands leverage Service Level Agreements (SLAs). An SLA is a contract that clearly delineates the deliverables a client expects from a service provider. It’s a smart way to ensure that, just like the systems being tested, the testing project itself meets expectations. The following are key facets of a standard service level agreement.

  • Detailed and lucid description of the service to be provided 
  • Reliability of the service 
  • Monitoring and reporting responsibility and metrics 
  • Procedure for reporting and dealing with problems 
  • Guarantees and warranties 
  • Consequences for not meeting deliverables 
  • Escape Clauses 

Our service level agreements contain all that and more. 

Having successfully completed hundreds of testing projects, we are confident we’d make yours a cost-effective, operationally successful project. Naturally, all our projects start with consultative calls and extensive meetings that enable us to draw up service level agreements with all our clients. These agreements are invaluable to the success of our projects. Be wary of testing companies that are sketchy on deliverables and the related imperatives.

3. Security and Protection of Intellectual Property

We have three world-class laboratories and seasoned testers and engineers who treat security and intellectual property protection as a priority. Security is integral to our processes to ensure the highest quality testing services.

We have strict protocols that ensure that clients’ assets are hacker-proof and that they’re not copied, even by authorized personnel. We have information security standards and policies that undergird our work. We use cutting edge technology within our testing services to enforce the security of all our work. The confidence that their intellectual property is safe with us is one of the reasons we’re able to have industry leaders (such as Adobe, Hewlett Packard, and Microsoft) as clients. 

4. Communication and Location

We work as partners with our clients. The truth is that our passion is ensuring your products excel in your markets. We ensure regular, in-depth communication on all critical facets of ongoing software testing projects. Having both parties aligned early and throughout the project assures collective success.


QualityLogic has offices exclusively located in the United States, so we can be available when you are. Call us on the phone, send us an email, or Slack us. We’ll answer in minutes, not hours. We know there are periods when time is of the essence and issues must be quickly resolved; that is why we make sure you can always reach us.

5. Engagement Model

When it comes to our testing services, we are flexible in terms of the engagement model you’d like. Our solutions are scalable. Having worked successfully on hundreds of projects, we can advise on the ideal model. Naturally, you will decide whether you’d like an incremental outsourcing model, where you outsource only parts of the project, or a total outsourcing model, where we handle all facets of the project. By the time we’ve had a few consultative meetings, we’ll be able to advise on what’s best. Would you like us to augment your in-house team? Or would you want a dedicated team focused on your projects? You’re in control at every phase of the project.

6. Customization

There are key commonalities to many projects. After all, there are industry standards, best practices, and client methodologies and procedures we leverage. The reality is that each project is also unique, and for the testing service to be successful, there’s a need to pay attention to these peculiarities. We do not slap a one-size-fits-all solution on your project. We design solutions that work for you and your business.

7. Cost Saving

The cost of software testing services can be quite substantial, depending on the testing company you use. Testing is an extensive, painstaking process. If you’re not dealing with an expert testing service, the cost can quickly add up.


We have world-class labs. We also developed some of the testing tools that have become industry standards, and over three decades we’ve mastered the economics of testing; ours is one of the most cost-effective solutions in the market. Our approach integrates different test techniques, such as functional testing, test automation, and test tools and utilities, to improve efficiency and cost performance. And, because you’d be dealing with the industry leader, there are no hidden charges; we only bill you when actual testing is being done. As a business ourselves, we know the impact of cost savings on your bottom line and work hard to optimize our testing processes.

Final Thoughts

We’ve explored some of the reasons why some of the best brands in the world rely on QualityLogic’s testing services. As you are aware, not all testing companies are created equal. We know you and your business deserve the best, and that’s why we want you to let us take care of you. 

We’ve been in business for over 30 years providing best-in-class testing services that cover a huge array of software applications, systems, and technologies. Our software testing services are flexible and scalable while being cost-effective. With our offices located in the United States, you can rest assured that we can work together seamlessly without the constraints of language and time zones. Nothing is going to be lost in translation. We literally speak your language. In fact, you can come and say hello at any time.

We have helped in providing quality assurance services for hundreds of world-class brands. We’d be honored to add your company to our client list. 

Click here to schedule a call with one of our top testing services experts.


What Are the Top 3 Qualities of a Successful Software QA Company?

You and your business have worked hard to create something new and innovative, and as a professional, you know that attention to detail and consistency are at the heart of any great product. This should continue through to the quality assurance process. During testing, productivity depends on your ability to identify, isolate, and remedy bottlenecks in your workflow. Hiring a company to review your QA process and test your software or application is a choice made by many software companies, but the value that you receive from outsourcing QA can vary wildly. Your chosen Software QA company should think like you: it should be proactive about defining systemic vulnerabilities and eliminating unnecessary risk.

In this post, we’ll show you the 3 qualities that allow a Software QA company to empower (rather than compromise) your business’s long-term strategy.  


1. They Understand Your Priorities

Certain things only come from experience. Understanding your priorities from a quality assurance perspective is one of them. A strong Software QA company will have the experience to understand your struggles and will know what it takes to get your priorities met. You should be able to benefit from their ability to work seamlessly with your internal teams and integrate with your agile development process — they should adapt to you, going the extra mile to deliver what you expect. You should be able to call them with an immediate need and have support right away. You need technology experts and world-class labs that feature a wide array of connected devices and state-of-the-art tools.

The right Software QA company will be able to provide this. It is what your business needs and deserves. It is the job of experienced QA engineers and testers to be the voice of the user. Go with the company that has been there, done that and has the track record to back it up.   

2. They Think Like a User

When it comes to the creation of software, developers think like developers, and not always as a user might. This is why developers usually don’t make the best testers. As a Software QA company, it is our job to put ourselves in the shoes of the users. Doing so opens you up to discovering screens that may not be intuitive and could possibly cause your users confusion or frustration. A Software QA company that grasps the importance of thinking like a user identifies opportunities to go through the user flow in a different way than one may have originally intended. As the Software QA company goes through the testing process, they uncover errors that no one knew existed, like broken links, slow functionality, and input errors.

According to the 2018 World Quality Report, 42% of respondents indicated that ensuring end-user satisfaction was a pivotal objective of software QA testing, up from 34% the year before, which solidifies why a software QA company should think like a user. At QualityLogic, we know that regardless of the expanse of your software or application feature set, usability inconsistencies can harm your brand and limit your product’s long-term viability. That’s why it is essential to choose a Software QA company that is just as focused on your long-term vision for your consumers as you are.

3. They are a Team Player

You have spent all this time developing your project. The last thing you want to do is let it go. Choose a Software QA company that can integrate seamlessly with your team. A Software QA company should have a fundamental understanding of each of the roles on the team and, as such, be a ‘generalist by trade.’ They need this broad understanding of the roles to see how your projects and products are progressing and, as a result, provide feedback on practices or processes that could be improved. And, above all, they should inform the team about whether the product under development is delivering a positive user experience.

You should also ask yourself “Can I count on this company to help my brand scale at any pace?”, “Does this company guarantee performance?” and “Will my chosen company offer QA services that are affordable and comprehensive?”  

Whichever Software QA company you choose, they should be there for your business and help it save money, get to market quicker, and eliminate costly mistakes. It’s essential to work with a Software QA company that is able to keep up with your brand, from empowering rapid deployments to debugging legacy products.


Choose the Right Fit for You

To sum it all up, you need to choose the right fit for you. Choose a Software QA company that empowers you and reinforces your business’s long-term strategy. A strong Software QA company with experience will put your business’s priorities first and has the track record to prove it. They should think like a user and contribute new ideas and perspectives to the project and ongoing strategy. Lastly, you want a team player. Pick the company that understands your industry and will become an extension of your team. They should understand your bottom line and help you achieve your goals. They should guarantee their work and always deliver.

QualityLogic is Ready to Support You and Your Business’s Testing Needs

From conception to release, we are a Software QA company that offers agile QA services guaranteeing exceptional user experience and robust performance at each iteration of your product. Best yet, we integrate our services seamlessly with your existing QA process and use test models that have been proven in a broad range of enterprise software and application development environments.   

What sets us apart as a Software QA company is, we don’t lock you into a long-term contract. We don’t charge change fees. That’s because we view our clients as long-term partners and we endeavor to earn their business every day.  

At QualityLogic, we work with you to create a sustainable architecture for your success. Once we connect, we stay connected. We offer ongoing custom support for all our clients. If you need us, we’ll be there. We test multiple platforms using a variety of test solutions over many different industries. Ready to transform your QA process with a Software QA company that puts YOU first? Connect with us for a consultation and learn how effortless QA can be. 


The California Rule 21 Conundrum: DER End-End Assurance

Led by California and the IEEE 1547 Work Group, the electric utility industry is rapidly developing a standardized method to communicate with and manage the growing penetration of DER assets at the distribution level. The California PUC, in conjunction with the IOUs and vendor community, has established a set of procedures for ensuring that the smart inverters meet both performance and communications requirements between utilities and the smart inverters.

This article looks at the current plans and how well they will ensure the desired performance of smart inverters under the direction of a utility DER management system.

While the test and certification procedures being put in place will make a huge difference in how well systems will interoperate and meet performance requirements, some additional testing will be required to ensure end-end interoperability and performance.

The CA Rule 21 Test and Certification Plan

CA Rule 21 specifies how distributed energy resources (DERs) such as Solar PV and battery storage interconnect to the grid. To address smart inverters, the updated Rule is organized into three implementation phases that correlate to three distinct parts of the testing and certification of the inverters and communications systems for California.

Two certification phases are currently available and mandated, and the third is coming in 2019.

  • Phase 1 is already in place and specifies a set of “autonomous” inverter functions that are tested and certified according to the UL 1741SA procedures.
  • Phase 2 mandates standardized IEEE 2030.5 communications between utilities and DER systems, certified under the SunSpec CSIP test program.
  • Phase 3 includes additional smart inverter functions requiring more intensive communications; these will be tested and certified based on the pending IEEE 1547.1 test procedures for both the functions and the communications about them.

The overriding goal of these programs is to reduce the costs and time associated with adding and managing DERs for the benefit of the distribution system and its customers. Key to accomplishing this is the standardization of the functionality of smart inverters and the communications used to manage the DER assets. The goal of the testing and certification process is to ensure the intent of the utility is communicated to and performed correctly by the smart inverters.

Let’s dig a bit deeper into the current process to understand the likelihood of achieving these goals.

Tackling the End-End Problem One Step at a Time

While the goal is end-end validation of the behavioral intent of the inverters, the testing takes a building block approach. This makes sense in that there are so many potential use case scenarios and combinations of equipment, systems, aggregators, etc., that validating even a fraction of actual end-end possible implementations is an unmanageable task.

The building block approach is the way we test most complex systems today. For instance, we test printers so they can communicate over Wi-Fi and wired networks, but we separately test that computer systems can communicate using Wi-Fi. The industry has refined the tools, testing and certification so that you can take any Wi-Fi certified device and it will almost surely “plug and play” with any other Wi-Fi certified device even if the two devices have not been tested and certified together.

The Building Blocks to End-End Interoperability and Performance

The CA Rule 21 approach is to test the inverter functionality separate from the communications protocol. Inverter functionality is tested and certified by Nationally Recognized Test Labs (NRTL) accredited for certifying inverters to the UL 1741SA test specification. This covers the CA Rule 21 Phase 1 functions but only 2 of the 8 Phase 3 smart inverter functions. UL 1741SA does not address any of the required CA Rule 21 communications capabilities. Its purpose is to validate that the tested functions do indeed operate as intended.

Once the updated IEEE 1547.1 test standard for smart inverter functions is approved, smart inverters will be tested by the NRTLs using that test specification. IEEE 1547.1 will be more comprehensive and will test additional Phase 3 functions (though not all of them).

While IEEE 1547.1 is primarily a functionality test comparable to (but more comprehensive than) UL 1741SA, it also adds, for the first time, a required communications capability using one of three designated protocols: SunSpec Modbus, IEEE 1815 (DNP3), or IEEE 2030.5.

This is a huge step forward in that for the first time a hardware certification program will validate that sending instructions or information to a smart inverter in an industry standard protocol will indeed achieve the desired behavior of the system.

The significance of this for managing DER assets cannot be over-emphasized. It is a great starting point for gaining confidence in the interoperability AND performance of a smart inverter in a single certification program.

Does the CA Rule 21 Plan Guarantee End-End Performance?

One downside is that if an inverter is certified interoperable in one protocol, that does not mean that using a different protocol would work as well. For example, an inverter certified using SunSpec Modbus must have some form of protocol translator to convert an IEEE 2030.5 message into the SunSpec Modbus messages. Such a protocol adapter will need its own certification program at some point to ensure the conversions between IEEE 2030.5 and SunSpec Modbus produce the intended inverter results.
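As a sketch of what such a protocol adapter does, the snippet below maps an IEEE 2030.5-style DER control message onto SunSpec-Modbus-style register writes. The register addresses and scale factors are illustrative placeholders only; a real adapter would follow the CSIP and SunSpec information models.

```python
# Hypothetical 2030.5 -> SunSpec Modbus adapter. The register numbers and
# scale factors below are invented for illustration, not real map entries.

# Illustrative register map: control setting -> (register address, scale factor)
REGISTER_MAP = {
    "opModFixedW": (40243, 1),     # active power limit, % of nameplate (assumed)
    "opModFixedPF": (40225, 100),  # fixed power factor, scaled x100 (assumed)
}

def translate_der_control(message: dict) -> list:
    """Convert a 2030.5-style control dict into (register, value) writes."""
    writes = []
    for setting, value in message.items():
        if setting not in REGISTER_MAP:
            raise ValueError(f"No Modbus mapping for setting {setting!r}")
        register, scale = REGISTER_MAP[setting]
        writes.append((register, int(round(value * scale))))
    return writes

# Example: one curtailment-plus-power-factor instruction becomes two writes.
writes = translate_der_control({"opModFixedW": 50, "opModFixedPF": 0.95})
```

Every entry in a map like this is a place where the translation can silently diverge from the utility’s intent, which is exactly why the adapter needs its own certification step.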

The CA Rule 21 Phase 2 mandated IEEE 2030.5 communications will be independently tested and certified using the SunSpec CSIP IEEE 2030.5 Test Specification and Program. This validates that the communications capabilities of the smart inverter, building EMS, or Aggregator can correctly exchange information and instructions with the utility DERMS systems using IEEE 2030.5.

However, this test and certification program does not validate that the actual inverter behavior conforms to the instructions in the IEEE 2030.5 messages. It validates the correct understanding of the message by the receiving party, but this assumes an accurate translation to the internal programming and behaviors of the physical inverter.

The theory is that if a utility DERMS is sending a 2030.5 message to a SunSpec CSIP certified inverter, building EMS or an Aggregator client, the message will be correctly interpreted, and the behavior of the controlled inverters changed to reflect the utility instructions. Thus, it should be possible for a utility to communicate with any UL 1741SA and SunSpec CSIP IEEE 2030.5 certified smart inverter, any SunSpec CSIP certified building EMS and any SunSpec CSIP certified Aggregator and the resulting inverter behavior will be as intended by the utility.

Interoperability challenges solved, right? More accurately, the interoperability challenges are being addressed and some of the necessary procedures put in place. While this greatly reduces the chances of interoperability issues, it does not guarantee that there won’t still be significant issues in the integrated system.

CA Rule 21 Testing: What’s Missing?

The CA Rule 21 plan is a huge step in the right direction, and the SunSpec CSIP certification program will greatly increase the probability that certified systems will work together as intended. The elements in the building block approach are an absolutely necessary step toward achieving the end goals. Those building blocks start with the UL 1741SA testing (to be replaced by testing to IEEE 1547-2018 when available) and then move to the SunSpec CSIP testing for the IEEE 2030.5 communications between utilities and inverters, building EMS, or aggregator systems.

But there is not yet a formal testing and certification plan for the links between an inverter, aggregator or building EMS receiving an IEEE 2030.5 CSIP message and the actual inverter behaviors. And the actual test and certification programs, while extremely valuable, do not guarantee interoperability. Why is that?

The Certification Programs and Missing Elements


It is useful to think of test and certification programs as risk management tools. While the long-term goal of a certification program may be 99.99% interoperability, the reality of programs just starting up is that they are focused on the 20% of actual features and functions that make up 80% of the interactions between systems. This is because the amount of testing one could do to reach a 99% confidence level is usually impractical.

With unlimited time and resources, we could conduct comprehensive testing to certify a system, but reality dictates finding the point of diminishing returns that lets us balance cost against quality.

A good example is the developing IEEE 1547-2018 interoperability testing. The interoperability testing is intended to demonstrate that one of the three named standards in the IEEE 1547-2018 standard (SunSpec, DNP3 or IEEE 2030.5) can be used to receive messages at the inverter and that those messages will result in the intended inverter settings and behaviors. A comprehensive approach would be to send messages that cover every possible function setting and combinations of settings and measure the resulting inverter behaviors. But this is more comprehensive than conducting a complete IEEE 1547.1 test and would take potentially months to complete.

Instead, recognizing that vendors themselves conduct comprehensive testing and that testing a sample of potential inverter functions increases confidence that all functions are correct, the certification testing can be thought of as a spot check. If a sampling of behaviors exercised through the communications protocol shows no problems, that provides increased confidence that the inverter will interoperate successfully with other systems and perform per the instructions given to it. If, on the other hand, issues show up in the spot checks, the inverter won’t be certified, and the vendor will have information that allows it to dig deeper into the specific and systemic issues in its implementation and fix them.
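The spot-check idea can be sketched in a few lines: sample a subset of the inverter’s functions, exercise each one, and certify only if every sampled function passes. The function names and the pass/fail check here are hypothetical placeholders, not actual Rule 21 test cases.

```python
import random

# Illustrative spot-check harness. Instead of exercising every function and
# setting combination, sample some functions and certify only if all pass.
ALL_FUNCTIONS = [
    "volt_var", "volt_watt", "freq_watt", "fixed_pf",
    "limit_max_power", "ramp_rate", "connect_disconnect", "reactive_power",
]

def spot_check(run_test, sample_size=3, seed=0):
    """Run a random sample of function tests; certify only if all succeed."""
    rng = random.Random(seed)  # seeded so the sampling is reproducible
    sample = rng.sample(ALL_FUNCTIONS, sample_size)
    failures = [name for name in sample if not run_test(name)]
    return {"sampled": sample, "failures": failures, "certified": not failures}

# Example: a device whose volt/var implementation is broken. Sampling all
# eight functions guarantees the defect is caught; a smaller sample might not.
result = spot_check(lambda name: name != "volt_var", sample_size=8)
```

The trade-off is visible in the `sample_size` parameter: smaller samples are cheaper but raise the chance a defective function escapes certification.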

One Step Towards End-End Guarantees

An even better approach would be to use one of the named protocols to conduct all the specified IEEE 1547.1 functional tests. This would provide confidence that both the functions and the communications about those functions using SunSpec, DNP3 or IEEE 2030.5 behave as intended. The only downside is that this doesn’t validate that use of either of the other two non-tested protocols results in the same level of performance.

Assume we have a certified IEEE 1547-2018 inverter that includes validation of the IEEE 2030.5 messaging, a SunSpec CSIP IEEE 2030.5 certified inverter client, and a SunSpec CSIP certified DERMS IEEE 2030.5 server. We plug them together and start sending instructions. If the instructions are only those that have been tested and certified, there is a high probability of successful interoperation.

However, there are cases that are not tested in the certification processes and may cause issues. For instance, if interoperability testing does not include tests for complex programs that include multiple inverter function setting changes in one message, there is a risk that such a message would result in an unexpected behavior.

When building EMS and aggregator systems are added to the mix, even more interoperability risks are created. SunSpec CSIP certification of an Aggregator validates that it can correctly exchange information and instructions with a DERMS CSIP server, but it does not test whether those instructions and information are correctly translated into whatever protocol the aggregator uses to communicate with an inverter. Even if the inverters are certified to IEEE 1547-2018, unless a specific test and certification step validates that the aggregator-inverter interface delivers the intent of the DERMS server message, a new risk has been introduced. Every time an IEEE 2030.5 message is translated into any other protocol (whether standard or proprietary), new possibilities for errors are created. And as of now, we don’t have a process in place or planned to address this issue.

What Can a Utility Do to Ensure End-End DER Performance?

It makes little sense for a utility to replicate all the testing that is already being required for CA Rule 21 acceptance. First and foremost, a utility can absolutely mandate that any DER inverters, building EMS systems or aggregators, and their own IEEE 2030.5 server pass the UL 1741SA/IEEE 1547-2018 and/or SunSpec CSIP IEEE 2030.5 certifications before even being considered for demonstrations, pilots or deployments of DER management systems.

Starting with these “building blocks”, the level of effort to reduce risk of interoperability problems becomes manageable. The new IEEE 1547.1-2018 test procedures include commissioning tests. These make sense for larger, one-off installations but are not very practical for any sort of large-scale deployment of smaller inverters. In this case, the most useful next step is to design and execute a use-case specific acceptance test process to be conducted internally or using a 3rd party lab with appropriate equipment and skills.

The primary focus of any utility testing program should be to ensure that its unique planned deployment scenarios are tested for the end-end system. If the deployment is focused on using behind-the-meter storage to manage excess solar PV through an aggregator, then the test design and test lab should be designed to validate that DERMS 2030.5 messages are implemented correctly in the end inverters. This would require:

  1. A clear specification of the use case including the types and nature of the messages and instructions – e.g., requests to the aggregator for storage and solar PV status updates; new ramp rates for storage; schedules of when to store and when to discharge into the grid; schedules or instructions to the PV inverters to change curves and settings.
  2. A grid simulator and a PV simulator along with measurement devices to capture both the inverter settings and electrical behaviors.
  3. A set of tests that implement the use case specification focused on the more complex scenarios such as multiple DER functional curves and setting changes in one message and scheduling of storage and PV inverter behaviors.
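A harness for such a use-case acceptance test might be sketched as follows. The aggregator, inverter, and setting names here are hypothetical stand-ins for the lab equipment and real client/server software a utility would actually exercise.

```python
# Hypothetical end-end acceptance test skeleton: send one DERMS instruction
# through the aggregator path and verify every end inverter reflects it.

class FakeInverter:
    """Stand-in for a physical inverter plus its measurement devices."""
    def __init__(self):
        self.settings = {}
    def update(self, instruction):
        self.settings.update(instruction)

class FakeAggregator:
    """Stand-in aggregator that relays DERMS instructions to its inverters."""
    def __init__(self, inverters):
        self.inverters = inverters
    def apply(self, instruction):
        for inv in self.inverters:
            inv.update(instruction)

def end_to_end_check(aggregator, inverters, instruction):
    """Apply one instruction and confirm every inverter reflects it."""
    aggregator.apply(instruction)
    return all(
        all(inv.settings.get(key) == val for key, val in instruction.items())
        for inv in inverters
    )

inverters = [FakeInverter(), FakeInverter()]
agg = FakeAggregator(inverters)
# A multi-setting message, the kind of complex case certification may skip.
ok = end_to_end_check(agg, inverters, {"ramp_rate_pct_per_sec": 2, "max_w_pct": 80})
```

In a real lab, the fakes would be replaced by the DERMS 2030.5 server, the candidate aggregator, and instrumented inverters on a grid simulator; the comparison step would read back measured electrical behavior, not just settings.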

With such a test system in place, it would be relatively straightforward to validate each of the candidate aggregator and DER systems from an end-end perspective.

Over time, this task should shrink as the IEEE 1547.1 certification is put in place and future end-end certifications are designed and implemented by the industry.

Leveraging CA Rule 21 Certifications for Each Utility

There is no question that the CA Rule 21 test and certification requirements already being put in place will go a long way to reducing the cost and time to implement an effective DER communications system. This will benefit utilities, vendors and consumers not just in California but globally since the standardization makes leveraging the investments being made a very attractive process.

For utilities with particularly unique or complex use cases, the CA Rule 21 procedures will make their specification and implementation tasks much easier. But there will still be cases where additional end-end testing is required to gain the assurance needed to rely on DERs as a critical part of the grid.

Posted on

Continuous Testing: A Journey of Ongoing Improvement and Effort

Technology has fundamentally changed the way businesses operate. Remaining relevant and gaining the competitive advantage is top of mind for most technology business executives. One of the ways businesses are attempting to stay competitive is by accelerating their release cadence. Adopting the tenets of Agile or DevOps is making it possible to develop and release software at a level that meets user demands.

Calling this a drastic change from the waterfall method of taking months to create a new release and more months to test it is the very definition of an understatement. Note that this sudden dramatic acceleration of development and release cycles did not make testing go away.

The headlong release of new software versions goes out to a software marketplace that lives and dies on user reviews. An egregious defect that escapes into production can now drive hordes of negative user reviews within hours, long before a patch can be sent out. Therefore, we now have Continuous Testing (CT) to go with the accelerated development processes described above.

Test automation has taken a curious path from being a good idea, to a cure-all, to not worth what it costs, to a matter of marketplace survival. It started out as a UI-centric tool that could populate screen forms quickly and often and then actuate all the controls on that screen. But to support continuous testing for Agile/DevOps, test automation will need a substantial shift in focus.

Why Modern Development Methodologies Demand a Continuous Testing Approach

Quality as a concept is predicated on the careful, deliberate examination of the object under test. This has led to testing processes remaining stuck in the past even as organizations have invested heavily in high-speed development. Test teams have been showered with automation-gone-wrong horror stories, and they want to avoid that pain for as long as possible.

[Chart: Continuous Testing and DevOps adoption. Image credit: Statista]

That said, Agile and DevOps have been adopted by over 80% of the industry and that implementation absolutely requires Continuous Testing. As software delivery accelerates, testing cannot keep pace if it relies on manual processes. It is not possible for manual testing to provide Agile developers with the verification that high velocity releases require.

True Continuous Testing cannot be based on simply taking a manual quality process and adding in some automation. And, without a reliable QA safety net, Agile goes from a market advantage to a business risk.

Continuous Testing Gains Leverage

Agile QA testing is typically viewed as a progression from Unit Tests through Unit Integration on to System Integration, ending at User Acceptance Tests. Continuous Testing pushes the primary focus toward the start of that progression. UI-based tests, typically performed in System Integration and UAT, require environments that are simply not available early in development, and as the underlying services and data sets are refined and finalized, any automated UI tests that depend on them begin to fail unless they are rewritten to account for the changes.

The proliferation of APIs as the system communication method of choice has provided a handy lever for test automation. CT requires automation and drives toward testing at the API layer. APIs are considered the most stable interface type in the system. They are available earlier than UI controls and offer connection to back-end and third-party functionality that UIs do not. Best of all, they directly lend themselves to machine testing.

Automated API Testing

Two vital supports for API testing are Service Virtualization and Test Data Management. Test stubs for cloud-based services and detailed, quickly reset test data sets enable continuous early testing and facilitate defect reproduction. This moves discovery and correction closer to the start of the development process.
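Service virtualization can be as simple as a local stub that stands in for a cloud-based dependency so API tests run early, fast, and deterministically. The sketch below virtualizes a hypothetical third-party payment API; the endpoint path and payload are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubPaymentService(BaseHTTPRequestHandler):
    """Virtualized third-party payment API returning a canned response."""
    def do_GET(self):
        body = json.dumps({"status": "authorized", "txn": "stub-001"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep test output quiet
        pass

# Bind to port 0 so the OS picks any free port; run the stub in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), StubPaymentService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A test can now exercise the payment flow without the real cloud service.
url = f"http://127.0.0.1:{server.server_port}/payments/42"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

Because the stub always returns the same canned data, defects found against it reproduce exactly, which is what moves discovery and correction closer to the start of development.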

A set of sanity and regression tests triggered by code check-ins and builds should be used as release gates to guard against collateral damage. Too often, the rush to create features and fix defects damages the system in unexpected places that have only obscure connections to the code that changed. Risk-coverage optimization through carefully maintained sanity and regression tests is the best way to prevent this.
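A minimal API-layer sanity check of the kind that could gate a build might look like the following. The order-creation handler and its payloads are hypothetical stand-ins for a real system under test; the point is that the check calls the service logic directly, with no UI or environment dependencies.

```python
import json

# Hypothetical order-creation endpoint: validate input, return (status, body).
def create_order_handler(request_body: str):
    try:
        order = json.loads(request_body)
    except json.JSONDecodeError:
        return 400, json.dumps({"error": "invalid JSON"})
    if "sku" not in order or order.get("qty", 0) <= 0:
        return 400, json.dumps({"error": "sku and positive qty required"})
    return 201, json.dumps({"id": 1, "sku": order["sku"], "qty": order["qty"]})

# Sanity checks suitable as a check-in gate: one happy path, one rejection.
status_ok, body_ok = create_order_handler('{"sku": "A-100", "qty": 2}')
status_bad, _ = create_order_handler('{"sku": "A-100", "qty": 0}')
```

Tests at this layer run in milliseconds, so a full sanity suite can execute on every check-in without slowing the pipeline.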

Behavior Driven Development

One way to merge Continuous Testing into Agile/DevOps is to institute Behavior Driven Development. Too often this is confused with test-driven development, which focuses on the developer’s idea of how the software should work; behavior is how the user expects the system to behave. As Agile development stories are created, use the Given/When/Then (and And) keywords to specify clearly what the behavior of the test, and by definition the code, is supposed to be.
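A Given/When/Then story can be expressed directly as an automated check. Tools such as Cucumber or behave map Gherkin text to step functions; this plain-Python sketch, built around a hypothetical shopping-cart model, shows the same structure inline.

```python
# Hypothetical domain model for the story below.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, sku, price):
        self.items.append((sku, price))
    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_story():
    # Given a cart containing two items
    cart = Cart()
    cart.add("book", 20.0)
    cart.add("pen", 5.0)
    # When the customer views the total
    total = cart.total()
    # Then it equals the sum of the item prices
    return total

result = test_cart_total_story()
```

Because the story’s keywords survive as comments, the test remains readable to the product owner who wrote the behavior, not just to the developer who automated it.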

This approach combined with test automation of the lower levels of the test tree should bring the best CT results with the least pain.

The Continuous Test Journey

Many QA, Development and Product managers have viewed this task with despair. A better approach is to realize that Continuous Testing is a journey rather than a destination. Continuous Testing goes hand in hand with continuous improvement and ongoing, consistent effort pays off:

Google’s John Micco says that they have 4.2 million individual tests running continuously both before and after code submission with 150 million test executions per day and 99% of all these test executions pass.