Great questions! Let's tackle each of your queries step-by-step:
Yes, you should use a scaling layer at the end of the actor network to scale the actions to the desired range. The values you provided look correct:

scalingLayer('Name','ActorScaling1','Scale',[5;pi],'Bias',[5;pi])

This will scale the first action to the range [0, 10] and the second action to the range [0, 2π], since the layer computes Scale .* input + Bias on the tanh output, which lies in [-1, 1].
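To make this concrete, here is a minimal sketch of an actor network ending in tanh plus the scaling layer. The layer sizes, layer names, and the observation dimension are placeholders, not taken from your model:

```matlab
% Sketch of an actor network ending in tanh + scaling.
% obsDim, hidden size, and layer names are assumptions; adapt to your setup.
obsDim = 4;   % assumed observation size
actDim = 2;   % two actions: ranges [0,10] and [0,2*pi]

actorLayers = [
    featureInputLayer(obsDim,'Name','obsInput')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(actDim,'Name','fcOut')
    tanhLayer('Name','tanh')                  % outputs in [-1, 1]
    scalingLayer('Name','ActorScaling1', ...
        'Scale',[5;pi],'Bias',[5;pi])         % maps to [0,10] and [0,2*pi]
];
```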
For TD3, it's important to add noise to the actions to encourage exploration. Since your action ranges are not within [-1, 1], you'll need to adjust the noise accordingly:
Exploration noise: this noise is added to the actions during training to explore the action space. A common approach is Gaussian noise with a standard deviation that is small relative to each action's range. For your case, you might start with something like:

explorationNoiseVariance = [1; 0.1];  % variances for the two actions (column to match the action vector)
explorationNoise = sqrt(explorationNoiseVariance) .* randn(size(action));  % std = sqrt(variance)
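Depending on your release, the same noise can instead be configured on the agent options, so the agent handles exploration for you. The property names below (ExplorationModel and its fields) are assumed from recent Reinforcement Learning Toolbox releases; check your documentation before relying on them:

```matlab
% Hedged sketch: setting TD3 exploration noise via agent options.
% Property names are assumptions for recent toolbox releases.
agentOpts = rlTD3AgentOptions;
agentOpts.ExplorationModel.StandardDeviation = sqrt([1; 0.1]);  % per-action std
agentOpts.ExplorationModel.StandardDeviationDecayRate = 1e-5;   % optional decay
```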
While rlTD3AgentOptions doesn't have a built-in action-clipping feature, you can manually clip the actions using the min and max functions after scaling.
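A minimal sketch of that clipping step, assuming the noisy action is held in placeholder variables (action and explorationNoise are not from your code):

```matlab
% Clip the noisy, scaled action back into its valid ranges.
% action and explorationNoise are placeholder 2x1 vectors.
actionLow  = [0; 0];       % lower bounds for the two actions
actionHigh = [10; 2*pi];   % upper bounds for the two actions

noisyAction   = action + explorationNoise;
clippedAction = min(max(noisyAction, actionLow), actionHigh);
```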