Settings and activity
9 results found
- 1 vote · Simon Born shared this idea
- 1 vote · Simon Born shared this idea
- 1 vote · Simon Born shared this idea
- 2 votes · Simon Born supported this idea
- 4 votes · Simon Born supported this idea
- 3 votes · Simon Born supported this idea
- 4 votes · Simon Born commented:
We are currently tackling this topic, and it is quite a headache. We run a decentralized experimentation setup.
The problem is, on the one hand, that flags stay in the code even though they are no longer in use. What is worse, they also stay in the datafile, constantly increasing its size.
Finding out which flags are still in use and which are not is very complex in a decentralized experimentation setup with independent codebases.
The solution I could imagine on Optimizely's side would be to send sampled pings whenever a flag is evaluated (e.g. 1 in 1,000). There should then be an option to auto-retire flags that have not been evaluated for more than x days.
The "flag usage" events should not count against any budget (impression or MAU budget).
To make this even more efficient (and put less strain on clients running Optimizely FX), Optimizely could recognize how frequently each flag is typically called and adjust the sampling rate per flag automatically; in essence, one ping per day suffices. Thinking about it this way, the datafile could be updated automatically, e.g. every hour, and pings would only be sent for flags that have not been seen yet that day.
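A rough sketch of what such client-side sampling with daily deduplication could look like (purely illustrative; decideSomehow() and the ping endpoint are placeholders, not the actual Optimizely FX API):

```ts
// Illustrative sketch only: sampled "flag usage" pings, at most one per flag per day.
// decideSomehow() and the /flag-usage endpoint are placeholders, not the real Optimizely FX API.

const SAMPLE_RATE = 1 / 1000;        // e.g. sample 1 in 1,000 evaluations
let seenDay = "";                    // day the dedup set belongs to
const seenToday = new Set<string>(); // flags already pinged on that day

// Placeholder for whatever the SDK actually does to decide a flag.
function decideSomehow(flagKey: string, userId: string): boolean {
  return true;
}

function evaluateFlag(flagKey: string, userId: string): boolean {
  const enabled = decideSomehow(flagKey, userId);

  // Reset the dedup set when the day rolls over.
  const today = new Date().toISOString().slice(0, 10);
  if (today !== seenDay) {
    seenDay = today;
    seenToday.clear();
  }

  // One ping per flag per day suffices, so skip flags already reported today
  // and sample the rest to keep the client overhead negligible.
  if (!seenToday.has(flagKey) && Math.random() < SAMPLE_RATE) {
    seenToday.add(flagKey);
    // Fire-and-forget; these usage pings should not count against impression/MAU budgets.
    void fetch("https://usage.example.invalid/flag-usage", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ flagKey, day: today }),
    });
  }
  return enabled;
}
```

On the server side, flags whose last ping is older than x days would then become candidates for auto-retirement.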
I do find the point "Help customers understand how their users are reacting to a new feature rollout" strange. That is an A/B test, not a Targeted Delivery.
Simon Born supported this idea
- 1 vote · Simon Born shared this idea
- 1 vote · Simon Born commented:
Just as a comment: we built this ourselves, sending events for LCP, INP, etc. on each page and passing the measured values in Optimizely's "value" field.
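Roughly, it looks like this (simplified sketch, not our production code; the SDK key, event keys, and visitor-id helper are placeholders, and in practice you would wait for the datafile to be ready before tracking):

```ts
import { onLCP, onINP, type Metric } from "web-vitals";
import { createInstance } from "@optimizely/optimizely-sdk";

// Sketch only: SDK key, event keys and visitor-id handling are placeholders.
const optimizely = createInstance({ sdkKey: "<YOUR_SDK_KEY>" });

// Placeholder; in practice the id comes from a cookie or your identity layer.
function getVisitorId(): string {
  return "visitor-123";
}

// Forward each Core Web Vital to Optimizely, passing the measurement
// in the reserved "value" event tag.
function sendVital(eventKey: string) {
  return (metric: Metric) => {
    optimizely?.track(eventKey, getVisitorId(), {}, { value: metric.value });
  };
}

onLCP(sendVital("lcp"));
onINP(sendVital("inp"));
```

The event keys ("lcp", "inp" here) still need to be defined in the Optimizely project for the events to be counted.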
Referencing Sarah's answer here: my point was not about the performance of the variants connected with a flag (that would be an A/B test), but more about the governance and usage of flags. In a decentralized setting, the tool is in danger of becoming cluttered.
One is blind to which flags are still being used, which hinders cleanup.