Settings and activity
9 results found
-
1 vote
Simon Born
shared this idea
·
-
10 votes
Simon Born
commented
Performance (i.e. speed) of the search is a related topic; I currently find it sub-par and it should be improved.
Simon Born
supported this idea
·
-
1 vote
Simon Born
shared this idea
·
Simon Born
commented
I know that Optimizely tries to keep bucketing consistent when traffic allocation and/or distribution changes.
It would be great to be able to see the bucketing allocation (e.g. 1-5000 for A, 5001-10000 for B) in the interface.
In this example it is simple, but when ramping up (and possibly changing the distribution of traffic at the same time) it would be great to be able to verify via the interface which buckets are being used.
I am aware of the Customer Profile Service, but I see this as independent from that.
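To make concrete what I mean by "bucketing allocation", here is a rough sketch of range-based bucketing (the hash function, bucket count and ranges below are placeholders for illustration, not Optimizely's actual implementation):

```typescript
// Rough illustration of range-based bucketing; the hash below is a
// simple stand-in, not the algorithm Optimizely actually uses.
type Range = { variation: string; min: number; max: number }; // inclusive bucket bounds

// Example allocation as it might be shown in the UI: 1-5000 -> A, 5001-10000 -> B.
const allocation: Range[] = [
  { variation: "A", min: 1, max: 5000 },
  { variation: "B", min: 5001, max: 10000 },
];

// Deterministic stand-in hash mapping (userId, experimentId) to 1..10000.
function bucketOf(userId: string, experimentId: string): number {
  let h = 0;
  for (const ch of userId + experimentId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return (h % 10000) + 1;
}

function variationFor(userId: string, experimentId: string): string | null {
  const bucket = bucketOf(userId, experimentId);
  const hit = allocation.find(r => bucket >= r.min && bucket <= r.max);
  return hit ? hit.variation : null; // null = visitor not in the experiment
}

// Seeing these ranges in the interface would let us verify, after ramping up
// or changing the distribution, which buckets map to which variation.
console.log(variationFor("visitor-123", "exp-42"));
```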
-
1 vote
Simon Born
shared this idea
·
-
4 votes
Simon Born
supported this idea
·
-
4 votes
Simon Born
supported this idea
·
-
5 votes
Simon Born
commented
Referencing Sarah's answer here - my point was not about the performance of the variants connected with a flag (that would be an A/B test) but more about governance and usage of flags. In a decentralized setting, the tool is in danger of becoming cluttered.
One is blind to which flags are still being used, which hinders cleanup.
Simon Born
commented
We are currently tackling this topic and it is quite a headache. We run a decentralized experimentation setup.
The problem is, on the one hand, that flags stay in the code even though they are not in use anymore. What is worse, they also stay in the datafile, thereby constantly increasing its size.
Finding out which flags are still in use and which are not is very complex in a decentralized experimentation setting with independent codebases.
The solution I could imagine on Optimizely's side would be to send sampled pings whenever a flag is evaluated (e.g. 1 in 1,000 evaluations; see the sketch below). There should then be an option to auto-retire flags that have not been called for more than x days.
The "flag usage" events should not count against any budget (impression or MAU budget).
To make this even more efficient (and put less strain on clients executing Optimizely FX), Optimizely could learn the typical frequency at which a flag is evaluated and adjust the sampling rate per flag automatically, since in essence one ping per day suffices. Thinking about it this way, the datafile could be updated automatically, e.g. every hour, and clients would only send pings for flags that had not yet been seen that day.
I do find the point "Help customers understand how their users are reacting to a new feature rollout." strange - that would be an A/B test and not a Targeted Delivery.
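To illustrate the sampled-ping idea, a rough client-side sketch (this is a hypothetical mechanism, not an existing Optimizely FX feature; decideFlag and sendUsagePing are made-up stand-ins):

```typescript
// Hypothetical sketch of client-side sampled "flag usage" pings.
// decideFlag stands in for the real SDK decision call; sendUsagePing
// stands in for a lightweight beacon that would not count against
// impression/MAU budgets.

const DEFAULT_SAMPLE_RATE = 1 / 1000;   // e.g. 1 ping per 1,000 evaluations
const seenToday = new Set<string>();    // flags already reported today (daily reset omitted)

function sendUsagePing(flagKey: string): void {
  // Placeholder endpoint: in reality this would be a tiny beacon to Optimizely.
  navigator.sendBeacon?.("/hypothetical/flag-usage", JSON.stringify({ flagKey, ts: Date.now() }));
}

function decideWithUsagePing<T>(
  flagKey: string,
  decideFlag: (key: string) => T,
  sampleRate = DEFAULT_SAMPLE_RATE
): T {
  // One ping per flag per day is enough to mark it as "still in use",
  // so skip flags already reported today and sample the rest.
  if (!seenToday.has(flagKey) && Math.random() < sampleRate) {
    seenToday.add(flagKey);
    sendUsagePing(flagKey);
  }
  return decideFlag(flagKey);
}
```

On the server side, flags whose last ping is older than x days could then be surfaced for (or automatically subjected to) retirement and removal from the datafile.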
Simon Born
supported this idea
·
-
2 votes
Simon Born
shared this idea
·
-
1 vote
Simon Born
commented
Just as a comment - we built this ourselves, sending events for LCP, INP etc. on each page and passing the values in Optimizely's "value" field (sketch below).
Hey! So I got this answer (not sure why it is not showing up here):
We currently do offer a "visitors over time" graph that is shown on the results page for the unique count, total count, revenue per user and value per user metric types.
Are there any specific adjustments you have in mind that would help your team to debug SRM issues?
Yes, the "visitors over time" graphs usually look something like the attached screenshot. However, even for experiments with an SRM, these graphs look very much the same, and the difference between the visitor counts in A and B is usually not spottable with the naked eye.
So instead of showing the time series of users like this, it would be better to show a distribution graph of users in which one could identify kinks and from that know when an SRM occurred.
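For reference, wiring up the Web Vitals events mentioned above looks roughly like this (a sketch of the approach; the event keys are placeholders that would have to be created in Optimizely first, and I am assuming the FX JavaScript SDK's user-context API together with the web-vitals package):

```typescript
import { createInstance } from "@optimizely/optimizely-sdk";
import { onLCP, onINP, type Metric } from "web-vitals";

// Sketch: report Core Web Vitals as Optimizely events, putting the
// measured value into the reserved "value" event tag so it shows up
// in value-per-user metrics. Event keys below are placeholders.
const optimizely = createInstance({ sdkKey: "YOUR_SDK_KEY" });
const user = optimizely?.createUserContext("visitor-123");

function report(eventKey: string) {
  return (metric: Metric) => {
    // "value" is Optimizely's reserved numeric event tag.
    user?.trackEvent(eventKey, { value: metric.value });
  };
}

onLCP(report("web_vital_lcp"));
onINP(report("web_vital_inp"));
```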