Which parameters are considered by iridion Scoring?
Backend: How much time is needed for the backend implementation of the experiment?
Frontend: How much time is needed for the frontend implementation of the experiment (including integration with the testing tool and setting up goals)?
Concept: How long does it take to create a good test concept for the experiment (including feedback loops, drafts, process descriptions, etc.)?
Additional: Are there other requirements that generate additional costs (e.g. quality assurance of the technical implementation, internal coordination rounds, political influence of stakeholders, etc.)?
Visual Contrast: Has there been sufficient change to the design/content in the experiment for the user to notice it?
Behavioral Contrast: Is there sufficient contrast in the variation to change the user’s perception or behavior?
Behavior Patterns: Does the experiment apply a behavioral pattern that influences how users act?
Traffic: Is the change part of the main conversion funnel? The closer the experiment is to the conversion, the more likely it is to generate valid uplifts (far from the conversion = e.g. homepage / close to the conversion = e.g. in the checkout funnel).
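As a rough illustration, the cost parameters (Backend, Frontend, Concept, Additional) and the impact parameters (Visual Contrast, Behavioral Contrast, Behavior Patterns, Traffic) can be combined into a single prioritization score. This is a minimal sketch with hypothetical equal weights and a 1–5 rating scale, not iridion's actual formula:

```python
# Hypothetical prioritization score: rate each parameter from 1 to 5,
# then divide total expected impact by total expected effort.
# Parameter names, scale, and equal weighting are illustrative only.

EFFORT_KEYS = ("backend", "frontend", "concept", "additional")
IMPACT_KEYS = ("visual_contrast", "behavioral_contrast",
               "behavior_patterns", "traffic")

def priority_score(ratings: dict) -> float:
    """Return the impact/effort ratio; a higher value means test sooner."""
    effort = sum(ratings[k] for k in EFFORT_KEYS)
    impact = sum(ratings[k] for k in IMPACT_KEYS)
    return round(impact / effort, 2)

example = {
    "backend": 1, "frontend": 2, "concept": 2, "additional": 1,  # low cost
    "visual_contrast": 4, "behavioral_contrast": 4,
    "behavior_patterns": 3, "traffic": 5,                        # high impact
}
print(priority_score(example))  # impact 16 / effort 6 -> 2.67
```

A cheap, high-impact idea like the example above scores well and would be tested early; an expensive, low-contrast idea would sink in the ranking.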
Other scoring models
With iridion, you’re also able to fully customize your scoring depending on your needs, or use an existing model such as PIE.